I0508 10:50:57.115353 7 test_context.go:423] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0508 10:50:57.115601 7 e2e.go:124] Starting e2e run "4f14be6b-7651-411f-a0bb-821a1da97ee2" on Ginkgo node 1
{"msg":"Test Suite starting","total":275,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1588935056 - Will randomize all specs
Will run 275 of 4992 specs

May 8 10:50:57.170: INFO: >>> kubeConfig: /root/.kube/config
May 8 10:50:57.174: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May 8 10:50:57.195: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 8 10:50:57.238: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 8 10:50:57.238: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
May 8 10:50:57.238: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 8 10:50:57.246: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
May 8 10:50:57.246: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May 8 10:50:57.246: INFO: e2e test version: v1.18.2
May 8 10:50:57.246: INFO: kube-apiserver version: v1.18.2
May 8 10:50:57.247: INFO: >>> kubeConfig: /root/.kube/config
May 8 10:50:57.250: INFO: Cluster IP family: ipv4
SSSSSSS
------------------------------
[k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 8 10:50:57.250: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
May 8 10:50:57.329: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 8 10:50:57.336: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-f30ddb75-a876-44ce-9f4b-7913f3bd8e5f" in namespace "security-context-test-6296" to be "Succeeded or Failed"
May 8 10:50:57.359: INFO: Pod "alpine-nnp-false-f30ddb75-a876-44ce-9f4b-7913f3bd8e5f": Phase="Pending", Reason="", readiness=false. Elapsed: 22.881851ms
May 8 10:50:59.371: INFO: Pod "alpine-nnp-false-f30ddb75-a876-44ce-9f4b-7913f3bd8e5f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035131946s
May 8 10:51:01.375: INFO: Pod "alpine-nnp-false-f30ddb75-a876-44ce-9f4b-7913f3bd8e5f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038815968s
May 8 10:51:01.375: INFO: Pod "alpine-nnp-false-f30ddb75-a876-44ce-9f4b-7913f3bd8e5f" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 8 10:51:01.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-6296" for this suite.
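The test above waits for a pod whose container sets `allowPrivilegeEscalation: false` to reach "Succeeded". A minimal sketch of such a pod manifest, built as a plain Python dict (the pod name is taken from the log; the image tag and command are assumptions, not the suite's actual values):

```python
import json

# Sketch of a pod manifest modeled on the "alpine-nnp-false-..." pod in the
# log. The image tag and restartPolicy are assumptions for illustration.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "alpine-nnp-false"},
    "spec": {
        "restartPolicy": "Never",
        "containers": [{
            "name": "test",
            "image": "alpine:3.11",  # assumed tag
            "securityContext": {
                # The field under test: with this false, the container's
                # processes cannot gain more privileges than their parent
                # (setuid binaries and similar escalation paths are blocked).
                "allowPrivilegeEscalation": False,
            },
        }],
    },
}

# The manifest serializes cleanly for `kubectl apply -f -` style use.
print(json.dumps(pod, indent=2))
```

The suite then polls the pod phase (Pending, Pending, Succeeded in the log) until it satisfies the "Succeeded or Failed" condition, exactly as the `Elapsed:` entries show.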
•{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":1,"skipped":7,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 8 10:51:01.402: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a ResourceQuota with terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a long running pod
STEP: Ensuring resource quota with not terminating scope captures the pod usage
STEP: Ensuring resource quota with terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a terminating pod
STEP: Ensuring resource quota with terminating scope captures the pod usage
STEP: Ensuring resource quota with not terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 8 10:51:17.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-6057" for this suite.

• [SLOW TEST:16.502 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":275,"completed":2,"skipped":17,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 8 10:51:17.904: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-ea576147-269e-442a-9633-8b89eac62290
STEP: Creating a pod to test consume secrets
May 8 10:51:18.142: INFO: Waiting up to 5m0s for pod "pod-secrets-1ed0f232-bd1c-4653-9e53-e0d2803dbf63" in namespace "secrets-3152" to be "Succeeded or Failed"
May 8 10:51:18.194: INFO: Pod "pod-secrets-1ed0f232-bd1c-4653-9e53-e0d2803dbf63": Phase="Pending", Reason="", readiness=false. Elapsed: 52.512954ms
May 8 10:51:20.199: INFO: Pod "pod-secrets-1ed0f232-bd1c-4653-9e53-e0d2803dbf63": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057266534s
May 8 10:51:22.204: INFO: Pod "pod-secrets-1ed0f232-bd1c-4653-9e53-e0d2803dbf63": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.062039528s
STEP: Saw pod success
May 8 10:51:22.204: INFO: Pod "pod-secrets-1ed0f232-bd1c-4653-9e53-e0d2803dbf63" satisfied condition "Succeeded or Failed"
May 8 10:51:22.207: INFO: Trying to get logs from node kali-worker pod pod-secrets-1ed0f232-bd1c-4653-9e53-e0d2803dbf63 container secret-volume-test:
STEP: delete the pod
May 8 10:51:22.256: INFO: Waiting for pod pod-secrets-1ed0f232-bd1c-4653-9e53-e0d2803dbf63 to disappear
May 8 10:51:22.259: INFO: Pod pod-secrets-1ed0f232-bd1c-4653-9e53-e0d2803dbf63 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 8 10:51:22.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3152" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":3,"skipped":25,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 8 10:51:22.271: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 8 10:51:23.024: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 8 10:51:25.034: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724531883, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724531883, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724531883, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724531882, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 8 10:51:28.310: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 8 10:51:40.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9963" for this suite.
STEP: Destroying namespace "webhook-9963-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:18.708 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":275,"completed":4,"skipped":55,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 8 10:51:40.979: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
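The "should honor timeout" run above registers a deliberately slow webhook and varies `timeoutSeconds` and `failurePolicy`: a 1s timeout against 5s of webhook latency fails the request unless the failure policy is Ignore, and an empty timeout defaults to 10s in the v1 API. A minimal sketch of the registration object exercised by that test, as a plain dict (the webhook name, service path, and rule are illustrative assumptions, not the suite's exact values):

```python
# Sketch of a v1 ValidatingWebhookConfiguration for the timeout scenario.
# Names, the service path, and the rule are hypothetical; the namespace and
# service name are taken from the log.
webhook_cfg = {
    "apiVersion": "admissionregistration.k8s.io/v1",
    "kind": "ValidatingWebhookConfiguration",
    "metadata": {"name": "e2e-test-webhook-timeout"},  # hypothetical name
    "webhooks": [{
        "name": "slow.example.com",  # hypothetical
        "timeoutSeconds": 1,   # shorter than the webhook's 5s latency
        "failurePolicy": "Fail",  # with "Ignore", the same timeout is tolerated
        "clientConfig": {"service": {
            "namespace": "webhook-9963",
            "name": "e2e-test-webhook",
            "path": "/slow",  # hypothetical path
        }},
        "rules": [{"apiGroups": [""], "apiVersions": ["v1"],
                   "operations": ["CREATE"], "resources": ["configmaps"]}],
        "sideEffects": "None",
        "admissionReviewVersions": ["v1"],
    }],
}

# Omitting timeoutSeconds entirely defaults it to 10 in v1, per the log line
# "Having no error when timeout is empty (defaulted to 10s in v1)".
DEFAULT_TIMEOUT_SECONDS = 10
```

With `failurePolicy: Fail` and a timeout shorter than the webhook's latency, the apiserver rejects the admission request; flipping to `Ignore` lets it through, which is exactly the sequence of STEPs logged above.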
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 8 10:51:42.076: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 8 10:51:44.087: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724531902, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724531902, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724531902, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724531901, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 8 10:51:47.245: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Creating a dummy validating-webhook-configuration object
STEP: Deleting the validating-webhook-configuration, which should be possible to remove
STEP: Creating a dummy mutating-webhook-configuration object
STEP: Deleting the mutating-webhook-configuration, which should be possible to remove
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 8 10:51:47.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9087" for this suite.
STEP: Destroying namespace "webhook-9087-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:6.612 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":275,"completed":5,"skipped":67,"failed":0}
SSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 8 10:51:47.591: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0644 on node default medium
May 8 10:51:47.646: INFO: Waiting up to 5m0s for pod "pod-bbbd6f57-fc12-4060-96e8-96cdb1240827" in namespace "emptydir-4396" to be "Succeeded or Failed"
May 8 10:51:47.651: INFO: Pod "pod-bbbd6f57-fc12-4060-96e8-96cdb1240827": Phase="Pending", Reason="", readiness=false. Elapsed: 5.144065ms
May 8 10:51:49.655: INFO: Pod "pod-bbbd6f57-fc12-4060-96e8-96cdb1240827": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00986457s
May 8 10:51:51.659: INFO: Pod "pod-bbbd6f57-fc12-4060-96e8-96cdb1240827": Phase="Running", Reason="", readiness=true. Elapsed: 4.013490454s
May 8 10:51:53.664: INFO: Pod "pod-bbbd6f57-fc12-4060-96e8-96cdb1240827": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.018879071s
STEP: Saw pod success
May 8 10:51:53.665: INFO: Pod "pod-bbbd6f57-fc12-4060-96e8-96cdb1240827" satisfied condition "Succeeded or Failed"
May 8 10:51:53.668: INFO: Trying to get logs from node kali-worker2 pod pod-bbbd6f57-fc12-4060-96e8-96cdb1240827 container test-container:
STEP: delete the pod
May 8 10:51:53.736: INFO: Waiting for pod pod-bbbd6f57-fc12-4060-96e8-96cdb1240827 to disappear
May 8 10:51:53.745: INFO: Pod pod-bbbd6f57-fc12-4060-96e8-96cdb1240827 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 8 10:51:53.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4396" for this suite.
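The "(non-root,0644,default)" case above runs a pod that, as a non-root user, writes a 0644-mode file into an emptyDir on the node's default medium and verifies the result from the container logs. A minimal sketch of such a pod, as a plain dict (the UID, image, command, and mount path are illustrative assumptions; `{}` for `emptyDir` selects the default node-disk medium):

```python
# Sketch of an emptyDir pod modeled on the (non-root,0644,default) case.
# UID, image, command, and mount path are assumptions for illustration.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-emptydir-0644"},  # hypothetical name
    "spec": {
        "restartPolicy": "Never",
        "securityContext": {"runAsUser": 1001},  # non-root; exact UID assumed
        "volumes": [{"name": "test-volume", "emptyDir": {}}],  # {} = default medium
        "containers": [{
            "name": "test-container",
            "image": "busybox:1.31",  # assumed
            # Write a file and force 0644 permissions inside the volume.
            "command": ["sh", "-c",
                        "echo data > /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f"],
            "volumeMounts": [{"name": "test-volume", "mountPath": "/test-volume"}],
        }],
    },
}

# Note: in JSON manifests, file-mode fields (e.g. a secret volume's
# defaultMode) are decimal, so octal 0644 is written as 420.
assert 0o644 == 420
```

Setting the medium to `{"medium": "Memory"}` instead would back the volume with tmpfs; the test above exercises the default (node disk) variant.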
• [SLOW TEST:6.161 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":6,"skipped":72,"failed":0}
SSSSSSSSSS
------------------------------
[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 8 10:51:53.752: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-51.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-51.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-51.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-51.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-51.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-51.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-51.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-51.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-51.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-51.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-51.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-51.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-51.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-51.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-51.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-51.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-51.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-51.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May 8 10:51:59.938: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-51.svc.cluster.local from pod dns-51/dns-test-dde4cecb-fb2b-472d-818f-556594a9f700: the server could not find the requested resource (get pods dns-test-dde4cecb-fb2b-472d-818f-556594a9f700)
May 8 10:51:59.943: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-51.svc.cluster.local from pod dns-51/dns-test-dde4cecb-fb2b-472d-818f-556594a9f700: the server could not find the requested resource (get pods dns-test-dde4cecb-fb2b-472d-818f-556594a9f700)
May 8 10:51:59.946: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-51.svc.cluster.local from pod dns-51/dns-test-dde4cecb-fb2b-472d-818f-556594a9f700: the server could not find the requested resource (get pods dns-test-dde4cecb-fb2b-472d-818f-556594a9f700)
May 8 10:51:59.949: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-51.svc.cluster.local from pod dns-51/dns-test-dde4cecb-fb2b-472d-818f-556594a9f700: the server could not find the requested resource (get pods dns-test-dde4cecb-fb2b-472d-818f-556594a9f700)
May 8 10:51:59.958: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-51.svc.cluster.local from pod dns-51/dns-test-dde4cecb-fb2b-472d-818f-556594a9f700: the server could not find the requested resource (get pods dns-test-dde4cecb-fb2b-472d-818f-556594a9f700)
May 8 10:51:59.960: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-51.svc.cluster.local from pod dns-51/dns-test-dde4cecb-fb2b-472d-818f-556594a9f700: the server could not find the requested resource (get pods dns-test-dde4cecb-fb2b-472d-818f-556594a9f700)
May 8 10:51:59.962: INFO: Unable to read jessie_udp@dns-test-service-2.dns-51.svc.cluster.local from pod dns-51/dns-test-dde4cecb-fb2b-472d-818f-556594a9f700: the server could not find the requested resource (get pods dns-test-dde4cecb-fb2b-472d-818f-556594a9f700)
May 8 10:51:59.965: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-51.svc.cluster.local from pod dns-51/dns-test-dde4cecb-fb2b-472d-818f-556594a9f700: the server could not find the requested resource (get pods dns-test-dde4cecb-fb2b-472d-818f-556594a9f700)
May 8 10:51:59.970: INFO: Lookups using dns-51/dns-test-dde4cecb-fb2b-472d-818f-556594a9f700 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-51.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-51.svc.cluster.local wheezy_udp@dns-test-service-2.dns-51.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-51.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-51.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-51.svc.cluster.local jessie_udp@dns-test-service-2.dns-51.svc.cluster.local jessie_tcp@dns-test-service-2.dns-51.svc.cluster.local]
May 8 10:52:04.976: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-51.svc.cluster.local from pod dns-51/dns-test-dde4cecb-fb2b-472d-818f-556594a9f700: the server could not find the requested resource (get pods dns-test-dde4cecb-fb2b-472d-818f-556594a9f700)
May 8 10:52:04.980: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-51.svc.cluster.local from pod dns-51/dns-test-dde4cecb-fb2b-472d-818f-556594a9f700: the server could not find the requested resource (get pods dns-test-dde4cecb-fb2b-472d-818f-556594a9f700)
May 8 10:52:04.983: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-51.svc.cluster.local from pod dns-51/dns-test-dde4cecb-fb2b-472d-818f-556594a9f700: the server could not find the requested resource (get pods dns-test-dde4cecb-fb2b-472d-818f-556594a9f700)
May 8 10:52:04.987: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-51.svc.cluster.local from pod dns-51/dns-test-dde4cecb-fb2b-472d-818f-556594a9f700: the server could not find the requested resource (get pods dns-test-dde4cecb-fb2b-472d-818f-556594a9f700)
May 8 10:52:04.996: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-51.svc.cluster.local from pod dns-51/dns-test-dde4cecb-fb2b-472d-818f-556594a9f700: the server could not find the requested resource (get pods dns-test-dde4cecb-fb2b-472d-818f-556594a9f700)
May 8 10:52:04.999: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-51.svc.cluster.local from pod dns-51/dns-test-dde4cecb-fb2b-472d-818f-556594a9f700: the server could not find the requested resource (get pods dns-test-dde4cecb-fb2b-472d-818f-556594a9f700)
May 8 10:52:05.002: INFO: Unable to read jessie_udp@dns-test-service-2.dns-51.svc.cluster.local from pod dns-51/dns-test-dde4cecb-fb2b-472d-818f-556594a9f700: the server could not find the requested resource (get pods dns-test-dde4cecb-fb2b-472d-818f-556594a9f700)
May 8 10:52:05.005: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-51.svc.cluster.local from pod dns-51/dns-test-dde4cecb-fb2b-472d-818f-556594a9f700: the server could not find the requested resource (get pods dns-test-dde4cecb-fb2b-472d-818f-556594a9f700)
May 8 10:52:05.010: INFO: Lookups using dns-51/dns-test-dde4cecb-fb2b-472d-818f-556594a9f700 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-51.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-51.svc.cluster.local wheezy_udp@dns-test-service-2.dns-51.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-51.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-51.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-51.svc.cluster.local jessie_udp@dns-test-service-2.dns-51.svc.cluster.local jessie_tcp@dns-test-service-2.dns-51.svc.cluster.local]
May 8 10:52:09.975: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-51.svc.cluster.local from pod dns-51/dns-test-dde4cecb-fb2b-472d-818f-556594a9f700: the server could not find the requested resource (get pods dns-test-dde4cecb-fb2b-472d-818f-556594a9f700)
May 8 10:52:09.978: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-51.svc.cluster.local from pod dns-51/dns-test-dde4cecb-fb2b-472d-818f-556594a9f700: the server could not find the requested resource (get pods dns-test-dde4cecb-fb2b-472d-818f-556594a9f700)
May 8 10:52:09.980: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-51.svc.cluster.local from pod dns-51/dns-test-dde4cecb-fb2b-472d-818f-556594a9f700: the server could not find the requested resource (get pods dns-test-dde4cecb-fb2b-472d-818f-556594a9f700)
May 8 10:52:09.983: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-51.svc.cluster.local from pod dns-51/dns-test-dde4cecb-fb2b-472d-818f-556594a9f700: the server could not find the requested resource (get pods dns-test-dde4cecb-fb2b-472d-818f-556594a9f700)
May 8 10:52:09.996: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-51.svc.cluster.local from pod dns-51/dns-test-dde4cecb-fb2b-472d-818f-556594a9f700: the server could not find the requested resource (get pods dns-test-dde4cecb-fb2b-472d-818f-556594a9f700)
May 8 10:52:09.999: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-51.svc.cluster.local from pod dns-51/dns-test-dde4cecb-fb2b-472d-818f-556594a9f700: the server could not find the requested resource (get pods dns-test-dde4cecb-fb2b-472d-818f-556594a9f700)
May 8 10:52:10.002: INFO: Unable to read jessie_udp@dns-test-service-2.dns-51.svc.cluster.local from pod dns-51/dns-test-dde4cecb-fb2b-472d-818f-556594a9f700: the server could not find the requested resource (get pods dns-test-dde4cecb-fb2b-472d-818f-556594a9f700)
May 8 10:52:10.005: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-51.svc.cluster.local from pod dns-51/dns-test-dde4cecb-fb2b-472d-818f-556594a9f700: the server could not find the requested resource (get pods dns-test-dde4cecb-fb2b-472d-818f-556594a9f700)
May 8 10:52:10.011: INFO: Lookups using dns-51/dns-test-dde4cecb-fb2b-472d-818f-556594a9f700 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-51.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-51.svc.cluster.local wheezy_udp@dns-test-service-2.dns-51.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-51.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-51.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-51.svc.cluster.local jessie_udp@dns-test-service-2.dns-51.svc.cluster.local jessie_tcp@dns-test-service-2.dns-51.svc.cluster.local]
May 8 10:52:14.977: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-51.svc.cluster.local from pod dns-51/dns-test-dde4cecb-fb2b-472d-818f-556594a9f700: the server could not find the requested resource (get pods dns-test-dde4cecb-fb2b-472d-818f-556594a9f700)
May 8 10:52:14.981: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-51.svc.cluster.local from pod dns-51/dns-test-dde4cecb-fb2b-472d-818f-556594a9f700: the server could not find the requested resource (get pods dns-test-dde4cecb-fb2b-472d-818f-556594a9f700)
May 8 10:52:14.984: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-51.svc.cluster.local from pod dns-51/dns-test-dde4cecb-fb2b-472d-818f-556594a9f700: the server could not find the requested resource (get pods dns-test-dde4cecb-fb2b-472d-818f-556594a9f700)
May 8 10:52:14.986: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-51.svc.cluster.local from pod dns-51/dns-test-dde4cecb-fb2b-472d-818f-556594a9f700: the server could not find the requested resource (get pods dns-test-dde4cecb-fb2b-472d-818f-556594a9f700)
May 8 10:52:14.992: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-51.svc.cluster.local from pod dns-51/dns-test-dde4cecb-fb2b-472d-818f-556594a9f700: the server could not find the requested resource (get pods dns-test-dde4cecb-fb2b-472d-818f-556594a9f700)
May 8 10:52:14.995: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-51.svc.cluster.local from pod dns-51/dns-test-dde4cecb-fb2b-472d-818f-556594a9f700: the server could not find the requested resource (get pods dns-test-dde4cecb-fb2b-472d-818f-556594a9f700)
May 8 10:52:14.997: INFO: Unable to read jessie_udp@dns-test-service-2.dns-51.svc.cluster.local from pod dns-51/dns-test-dde4cecb-fb2b-472d-818f-556594a9f700: the server could not find the requested resource (get pods dns-test-dde4cecb-fb2b-472d-818f-556594a9f700)
May 8 10:52:14.999: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-51.svc.cluster.local from pod dns-51/dns-test-dde4cecb-fb2b-472d-818f-556594a9f700: the server could not find the requested resource (get pods dns-test-dde4cecb-fb2b-472d-818f-556594a9f700)
May 8 10:52:15.004: INFO: Lookups using dns-51/dns-test-dde4cecb-fb2b-472d-818f-556594a9f700 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-51.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-51.svc.cluster.local wheezy_udp@dns-test-service-2.dns-51.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-51.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-51.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-51.svc.cluster.local jessie_udp@dns-test-service-2.dns-51.svc.cluster.local jessie_tcp@dns-test-service-2.dns-51.svc.cluster.local]
May 8 10:52:19.975: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-51.svc.cluster.local from pod dns-51/dns-test-dde4cecb-fb2b-472d-818f-556594a9f700: the server could not find the requested resource (get pods dns-test-dde4cecb-fb2b-472d-818f-556594a9f700)
May 8 10:52:19.979: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-51.svc.cluster.local from pod dns-51/dns-test-dde4cecb-fb2b-472d-818f-556594a9f700: the server could not find the requested resource (get pods dns-test-dde4cecb-fb2b-472d-818f-556594a9f700)
May 8 10:52:19.982: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-51.svc.cluster.local from pod dns-51/dns-test-dde4cecb-fb2b-472d-818f-556594a9f700: the server could not find the requested resource (get pods dns-test-dde4cecb-fb2b-472d-818f-556594a9f700)
May 8 10:52:19.985: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-51.svc.cluster.local from pod dns-51/dns-test-dde4cecb-fb2b-472d-818f-556594a9f700: the server could not find the requested resource (get pods dns-test-dde4cecb-fb2b-472d-818f-556594a9f700)
May 8 10:52:19.993: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-51.svc.cluster.local from pod dns-51/dns-test-dde4cecb-fb2b-472d-818f-556594a9f700: the server could not find the requested resource (get pods dns-test-dde4cecb-fb2b-472d-818f-556594a9f700)
May 8 10:52:19.996: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-51.svc.cluster.local from pod dns-51/dns-test-dde4cecb-fb2b-472d-818f-556594a9f700: the server could not find the requested resource (get pods dns-test-dde4cecb-fb2b-472d-818f-556594a9f700)
May 8 10:52:19.999: INFO: Unable to read jessie_udp@dns-test-service-2.dns-51.svc.cluster.local from pod dns-51/dns-test-dde4cecb-fb2b-472d-818f-556594a9f700: the server could not find the requested resource (get pods dns-test-dde4cecb-fb2b-472d-818f-556594a9f700)
May 8 10:52:20.002: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-51.svc.cluster.local from pod dns-51/dns-test-dde4cecb-fb2b-472d-818f-556594a9f700: the server could not find the requested resource (get pods dns-test-dde4cecb-fb2b-472d-818f-556594a9f700)
May 8 10:52:20.008: INFO: Lookups using 
dns-51/dns-test-dde4cecb-fb2b-472d-818f-556594a9f700 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-51.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-51.svc.cluster.local wheezy_udp@dns-test-service-2.dns-51.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-51.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-51.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-51.svc.cluster.local jessie_udp@dns-test-service-2.dns-51.svc.cluster.local jessie_tcp@dns-test-service-2.dns-51.svc.cluster.local] May 8 10:52:24.975: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-51.svc.cluster.local from pod dns-51/dns-test-dde4cecb-fb2b-472d-818f-556594a9f700: the server could not find the requested resource (get pods dns-test-dde4cecb-fb2b-472d-818f-556594a9f700) May 8 10:52:24.980: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-51.svc.cluster.local from pod dns-51/dns-test-dde4cecb-fb2b-472d-818f-556594a9f700: the server could not find the requested resource (get pods dns-test-dde4cecb-fb2b-472d-818f-556594a9f700) May 8 10:52:24.984: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-51.svc.cluster.local from pod dns-51/dns-test-dde4cecb-fb2b-472d-818f-556594a9f700: the server could not find the requested resource (get pods dns-test-dde4cecb-fb2b-472d-818f-556594a9f700) May 8 10:52:24.986: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-51.svc.cluster.local from pod dns-51/dns-test-dde4cecb-fb2b-472d-818f-556594a9f700: the server could not find the requested resource (get pods dns-test-dde4cecb-fb2b-472d-818f-556594a9f700) May 8 10:52:24.994: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-51.svc.cluster.local from pod dns-51/dns-test-dde4cecb-fb2b-472d-818f-556594a9f700: the server could not find the requested resource (get pods dns-test-dde4cecb-fb2b-472d-818f-556594a9f700) May 8 10:52:24.996: INFO: Unable to read 
jessie_tcp@dns-querier-2.dns-test-service-2.dns-51.svc.cluster.local from pod dns-51/dns-test-dde4cecb-fb2b-472d-818f-556594a9f700: the server could not find the requested resource (get pods dns-test-dde4cecb-fb2b-472d-818f-556594a9f700) May 8 10:52:24.999: INFO: Unable to read jessie_udp@dns-test-service-2.dns-51.svc.cluster.local from pod dns-51/dns-test-dde4cecb-fb2b-472d-818f-556594a9f700: the server could not find the requested resource (get pods dns-test-dde4cecb-fb2b-472d-818f-556594a9f700) May 8 10:52:25.002: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-51.svc.cluster.local from pod dns-51/dns-test-dde4cecb-fb2b-472d-818f-556594a9f700: the server could not find the requested resource (get pods dns-test-dde4cecb-fb2b-472d-818f-556594a9f700) May 8 10:52:25.007: INFO: Lookups using dns-51/dns-test-dde4cecb-fb2b-472d-818f-556594a9f700 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-51.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-51.svc.cluster.local wheezy_udp@dns-test-service-2.dns-51.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-51.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-51.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-51.svc.cluster.local jessie_udp@dns-test-service-2.dns-51.svc.cluster.local jessie_tcp@dns-test-service-2.dns-51.svc.cluster.local] May 8 10:52:30.051: INFO: DNS probes using dns-51/dns-test-dde4cecb-fb2b-472d-818f-556594a9f700 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 8 10:52:30.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-51" for this suite. 
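For context, the record names probed above (`dns-querier-2.dns-test-service-2.dns-51.svc.cluster.local` and `dns-test-service-2.dns-51.svc.cluster.local`) come from pairing a headless Service with a pod that sets `hostname` and `subdomain`. A minimal sketch of such a pairing, with names taken from the log (the actual e2e manifests differ; image and command are placeholders):

```yaml
# Headless Service: clusterIP None makes DNS resolve to pod IPs directly,
# enabling <hostname>.<service>.<ns>.svc.cluster.local records.
apiVersion: v1
kind: Service
metadata:
  name: dns-test-service-2
  namespace: dns-51
spec:
  clusterIP: None
  selector:
    name: dns-querier-2
  ports:
  - port: 80
    protocol: TCP
---
apiVersion: v1
kind: Pod
metadata:
  name: dns-querier-2
  namespace: dns-51
  labels:
    name: dns-querier-2
spec:
  hostname: dns-querier-2        # together with subdomain, yields
  subdomain: dns-test-service-2  # dns-querier-2.dns-test-service-2.dns-51.svc.cluster.local
  containers:
  - name: querier
    image: busybox:1.31          # placeholder; the test uses its own query images
    command: ["sleep", "3600"]
```

The "Unable to read" entries early in the run are expected: the probe retries until kube-dns has published the records, then reports "DNS probes ... succeeded".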
• [SLOW TEST:37.197 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":275,"completed":7,"skipped":82,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 8 10:52:30.950: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod May 8 10:52:31.109: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 8 10:52:40.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-9144" for this suite. 
• [SLOW TEST:9.468 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":275,"completed":8,"skipped":101,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 8 10:52:40.418: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating the pod May 8 10:52:47.502: INFO: Successfully updated pod "annotationupdate5de46955-b3f3-4c75-97d8-5d42dd49802c" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 8 10:52:49.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8921" for this suite. 
• [SLOW TEST:9.173 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":275,"completed":9,"skipped":113,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 8 10:52:49.591: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test override all May 8 10:52:49.694: INFO: Waiting up to 5m0s for pod "client-containers-98aee053-a02c-45ef-a44e-9e977ab2a14d" in namespace "containers-8941" to be "Succeeded or Failed" May 8 10:52:49.709: INFO: Pod "client-containers-98aee053-a02c-45ef-a44e-9e977ab2a14d": Phase="Pending", Reason="", readiness=false. Elapsed: 15.308822ms May 8 10:52:51.713: INFO: Pod "client-containers-98aee053-a02c-45ef-a44e-9e977ab2a14d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.019068594s May 8 10:52:53.717: INFO: Pod "client-containers-98aee053-a02c-45ef-a44e-9e977ab2a14d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023208417s STEP: Saw pod success May 8 10:52:53.717: INFO: Pod "client-containers-98aee053-a02c-45ef-a44e-9e977ab2a14d" satisfied condition "Succeeded or Failed" May 8 10:52:53.720: INFO: Trying to get logs from node kali-worker pod client-containers-98aee053-a02c-45ef-a44e-9e977ab2a14d container test-container: STEP: delete the pod May 8 10:52:53.909: INFO: Waiting for pod client-containers-98aee053-a02c-45ef-a44e-9e977ab2a14d to disappear May 8 10:52:53.967: INFO: Pod client-containers-98aee053-a02c-45ef-a44e-9e977ab2a14d no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 8 10:52:53.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-8941" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":275,"completed":10,"skipped":121,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 8 10:52:53.976: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod May 8 10:52:54.089: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 8 10:52:59.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-2488" for this suite. • [SLOW TEST:5.813 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":275,"completed":11,"skipped":159,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 8 10:52:59.790: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 8 10:53:11.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1788" for this suite. • [SLOW TEST:11.478 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance]","total":275,"completed":12,"skipped":202,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 8 10:53:11.268: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod liveness-6b33485f-5c1d-4126-9125-51511f121c60 in namespace container-probe-2698 May 8 10:53:17.663: INFO: Started pod liveness-6b33485f-5c1d-4126-9125-51511f121c60 in namespace container-probe-2698 STEP: checking the pod's current state and verifying that restartCount is present May 8 10:53:17.666: INFO: Initial restart count of pod liveness-6b33485f-5c1d-4126-9125-51511f121c60 is 0 May 8 10:53:35.709: INFO: Restart count of pod container-probe-2698/liveness-6b33485f-5c1d-4126-9125-51511f121c60 is now 1 (18.042718479s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 8 10:53:35.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2698" for this suite. 
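The restart observed above (restartCount going from 0 to 1 after ~18s) is the expected behavior of an HTTP liveness probe whose endpoint starts failing. A sketch of a pod in that shape, assuming the upstream e2e liveness test server (which serves `/healthz` successfully for a while and then returns errors; the exact image, port, and probe timings here are illustrative):

```yaml
# When /healthz starts returning non-2xx, the kubelet kills and restarts
# the container, which is what increments restartCount in the log above.
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/liveness   # test server that fails /healthz after a delay
    args: ["/server"]
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 3
      periodSeconds: 3
      failureThreshold: 1
```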
• [SLOW TEST:24.514 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":275,"completed":13,"skipped":217,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 8 10:53:35.783: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward api env vars May 8 10:53:35.846: INFO: Waiting up to 5m0s for pod "downward-api-777ecea8-2106-4141-806f-bafaac709151" in namespace "downward-api-5090" to be "Succeeded or Failed" May 8 10:53:36.155: INFO: Pod "downward-api-777ecea8-2106-4141-806f-bafaac709151": Phase="Pending", Reason="", readiness=false. Elapsed: 308.377375ms May 8 10:53:38.159: INFO: Pod "downward-api-777ecea8-2106-4141-806f-bafaac709151": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.312157599s May 8 10:53:40.163: INFO: Pod "downward-api-777ecea8-2106-4141-806f-bafaac709151": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.316664707s STEP: Saw pod success May 8 10:53:40.163: INFO: Pod "downward-api-777ecea8-2106-4141-806f-bafaac709151" satisfied condition "Succeeded or Failed" May 8 10:53:40.166: INFO: Trying to get logs from node kali-worker pod downward-api-777ecea8-2106-4141-806f-bafaac709151 container dapi-container: STEP: delete the pod May 8 10:53:40.238: INFO: Waiting for pod downward-api-777ecea8-2106-4141-806f-bafaac709151 to disappear May 8 10:53:40.241: INFO: Pod downward-api-777ecea8-2106-4141-806f-bafaac709151 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 8 10:53:40.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5090" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":275,"completed":14,"skipped":239,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 8 10:53:40.250: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-map-5adb6208-f204-4be1-9ac5-bbbfcdf6ba15 STEP: Creating a pod to test consume configMaps May 8 10:53:40.458: INFO: Waiting up to 5m0s for pod "pod-configmaps-48335b81-9ca3-48f5-a41e-8bfa6f0bd373" in namespace "configmap-3765" to be "Succeeded or Failed" May 8 10:53:40.497: INFO: Pod "pod-configmaps-48335b81-9ca3-48f5-a41e-8bfa6f0bd373": Phase="Pending", Reason="", readiness=false. Elapsed: 39.286814ms May 8 10:53:42.514: INFO: Pod "pod-configmaps-48335b81-9ca3-48f5-a41e-8bfa6f0bd373": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055489628s May 8 10:53:44.518: INFO: Pod "pod-configmaps-48335b81-9ca3-48f5-a41e-8bfa6f0bd373": Phase="Pending", Reason="", readiness=false. Elapsed: 4.059657721s May 8 10:53:46.522: INFO: Pod "pod-configmaps-48335b81-9ca3-48f5-a41e-8bfa6f0bd373": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.063850657s STEP: Saw pod success May 8 10:53:46.522: INFO: Pod "pod-configmaps-48335b81-9ca3-48f5-a41e-8bfa6f0bd373" satisfied condition "Succeeded or Failed" May 8 10:53:46.525: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-48335b81-9ca3-48f5-a41e-8bfa6f0bd373 container configmap-volume-test: STEP: delete the pod May 8 10:53:46.552: INFO: Waiting for pod pod-configmaps-48335b81-9ca3-48f5-a41e-8bfa6f0bd373 to disappear May 8 10:53:46.556: INFO: Pod pod-configmaps-48335b81-9ca3-48f5-a41e-8bfa6f0bd373 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 8 10:53:46.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3765" for this suite. 
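The "with mappings as non-root" variant above exercises two things: an `items:` list that remaps a ConfigMap key to a custom file path inside the volume, and a pod-level `runAsUser` so the file is read as a non-root UID. A minimal sketch under those assumptions (key names, paths, and UID are illustrative, not the test's actual values):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume-map
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps
spec:
  securityContext:
    runAsUser: 1000              # non-root, as the test name requires
  containers:
  - name: configmap-volume-test
    image: busybox:1.31
    command: ["cat", "/etc/configmap-volume/path/to/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map
      items:                     # the "mapping": key data-1 appears at a remapped path
      - key: data-1
        path: path/to/data-1
```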
• [SLOW TEST:6.315 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":275,"completed":15,"skipped":270,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 8 10:53:46.564: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0777 on node default medium May 8 10:53:46.644: INFO: Waiting up to 5m0s for pod "pod-3e5bc59d-ae43-4235-9322-59fcec09f59c" in namespace "emptydir-7961" to be "Succeeded or Failed" May 8 10:53:46.663: INFO: Pod "pod-3e5bc59d-ae43-4235-9322-59fcec09f59c": Phase="Pending", Reason="", readiness=false. Elapsed: 18.858915ms May 8 10:53:48.667: INFO: Pod "pod-3e5bc59d-ae43-4235-9322-59fcec09f59c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022725894s May 8 10:53:50.672: INFO: Pod "pod-3e5bc59d-ae43-4235-9322-59fcec09f59c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.027132791s May 8 10:53:52.674: INFO: Pod "pod-3e5bc59d-ae43-4235-9322-59fcec09f59c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.029596684s STEP: Saw pod success May 8 10:53:52.674: INFO: Pod "pod-3e5bc59d-ae43-4235-9322-59fcec09f59c" satisfied condition "Succeeded or Failed" May 8 10:53:52.676: INFO: Trying to get logs from node kali-worker pod pod-3e5bc59d-ae43-4235-9322-59fcec09f59c container test-container: STEP: delete the pod May 8 10:53:52.939: INFO: Waiting for pod pod-3e5bc59d-ae43-4235-9322-59fcec09f59c to disappear May 8 10:53:52.988: INFO: Pod pod-3e5bc59d-ae43-4235-9322-59fcec09f59c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 8 10:53:52.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7961" for this suite. • [SLOW TEST:6.432 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":16,"skipped":281,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 8 10:53:52.997: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 8 10:53:53.571: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 8 10:53:55.586: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724532033, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724532033, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724532033, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724532033, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} May 8 10:53:57.590: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724532033, loc:(*time.Location)(0x7b200c0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724532033, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724532033, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724532033, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 8 10:54:00.615: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 8 10:54:00.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3316" for this suite. STEP: Destroying namespace "webhook-3316-markers" for this suite. 
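The fail-closed behavior verified above comes from `failurePolicy: Fail`: when the API server cannot reach the webhook backend, the admission request is rejected rather than allowed through. A hedged sketch of a registration of that shape (the e2e test registers its webhook programmatically; the webhook name, path, and rule scope below are hypothetical, while the service/namespace names are taken from the log):

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: fail-closed-webhook
webhooks:
- name: fail-closed.example.com      # hypothetical webhook name
  failurePolicy: Fail                # reject the operation when the webhook is unreachable
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["configmaps"]
  clientConfig:
    service:
      name: e2e-test-webhook
      namespace: webhook-3316
      path: /unreachable-path        # no server answers here, so Fail policy kicks in
  sideEffects: None
  admissionReviewVersions: ["v1"]
```

With this in place, creating a ConfigMap in a namespace the rules select is unconditionally rejected, which is exactly what the "create a configmap should be unconditionally rejected" step checks.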
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.840 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":275,"completed":17,"skipped":288,"failed":0} S ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 8 10:54:00.837: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin May 8 10:54:00.899: INFO: Waiting up to 5m0s for pod "downwardapi-volume-daebc1c5-805f-483b-a6c1-a6a82da94323" in namespace "downward-api-7262" to be "Succeeded or Failed" May 8 10:54:00.927: INFO: Pod 
"downwardapi-volume-daebc1c5-805f-483b-a6c1-a6a82da94323": Phase="Pending", Reason="", readiness=false. Elapsed: 28.213359ms May 8 10:54:02.931: INFO: Pod "downwardapi-volume-daebc1c5-805f-483b-a6c1-a6a82da94323": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032059836s May 8 10:54:04.935: INFO: Pod "downwardapi-volume-daebc1c5-805f-483b-a6c1-a6a82da94323": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036202561s STEP: Saw pod success May 8 10:54:04.935: INFO: Pod "downwardapi-volume-daebc1c5-805f-483b-a6c1-a6a82da94323" satisfied condition "Succeeded or Failed" May 8 10:54:04.938: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-daebc1c5-805f-483b-a6c1-a6a82da94323 container client-container: STEP: delete the pod May 8 10:54:04.954: INFO: Waiting for pod downwardapi-volume-daebc1c5-805f-483b-a6c1-a6a82da94323 to disappear May 8 10:54:04.982: INFO: Pod downwardapi-volume-daebc1c5-805f-483b-a6c1-a6a82da94323 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 8 10:54:04.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7262" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":275,"completed":18,"skipped":289,"failed":0} SSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 8 10:54:04.990: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Performing setup for networking test in namespace pod-network-test-9176 STEP: creating a selector STEP: Creating the service pods in kubernetes May 8 10:54:05.273: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable May 8 10:54:05.446: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 8 10:54:07.449: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 8 10:54:09.451: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 8 10:54:11.450: INFO: The status of Pod netserver-0 is Running (Ready = false) May 8 10:54:13.449: INFO: The status of Pod netserver-0 is Running (Ready = false) May 8 10:54:15.454: INFO: The status of Pod netserver-0 is Running (Ready = false) May 8 10:54:17.450: INFO: The status of Pod netserver-0 is Running (Ready = false) May 8 10:54:19.450: INFO: The status of Pod netserver-0 is 
Running (Ready = true) May 8 10:54:19.456: INFO: The status of Pod netserver-1 is Running (Ready = false) May 8 10:54:21.461: INFO: The status of Pod netserver-1 is Running (Ready = false) May 8 10:54:23.460: INFO: The status of Pod netserver-1 is Running (Ready = false) May 8 10:54:25.466: INFO: The status of Pod netserver-1 is Running (Ready = false) May 8 10:54:27.460: INFO: The status of Pod netserver-1 is Running (Ready = false) May 8 10:54:29.460: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods May 8 10:54:35.519: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.116:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-9176 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 8 10:54:35.519: INFO: >>> kubeConfig: /root/.kube/config I0508 10:54:35.549948 7 log.go:172] (0xc002c856b0) (0xc0017ddc20) Create stream I0508 10:54:35.549979 7 log.go:172] (0xc002c856b0) (0xc0017ddc20) Stream added, broadcasting: 1 I0508 10:54:35.552983 7 log.go:172] (0xc002c856b0) Reply frame received for 1 I0508 10:54:35.553013 7 log.go:172] (0xc002c856b0) (0xc0017ddcc0) Create stream I0508 10:54:35.553027 7 log.go:172] (0xc002c856b0) (0xc0017ddcc0) Stream added, broadcasting: 3 I0508 10:54:35.554212 7 log.go:172] (0xc002c856b0) Reply frame received for 3 I0508 10:54:35.554252 7 log.go:172] (0xc002c856b0) (0xc001b5a140) Create stream I0508 10:54:35.554267 7 log.go:172] (0xc002c856b0) (0xc001b5a140) Stream added, broadcasting: 5 I0508 10:54:35.555199 7 log.go:172] (0xc002c856b0) Reply frame received for 5 I0508 10:54:35.629714 7 log.go:172] (0xc002c856b0) Data frame received for 3 I0508 10:54:35.629746 7 log.go:172] (0xc0017ddcc0) (3) Data frame handling I0508 10:54:35.629766 7 log.go:172] (0xc0017ddcc0) (3) Data frame sent I0508 10:54:35.629986 7 log.go:172] (0xc002c856b0) Data frame received for 3 I0508 
10:54:35.630023 7 log.go:172] (0xc0017ddcc0) (3) Data frame handling I0508 10:54:35.630288 7 log.go:172] (0xc002c856b0) Data frame received for 5 I0508 10:54:35.630324 7 log.go:172] (0xc001b5a140) (5) Data frame handling I0508 10:54:35.632003 7 log.go:172] (0xc002c856b0) Data frame received for 1 I0508 10:54:35.632024 7 log.go:172] (0xc0017ddc20) (1) Data frame handling I0508 10:54:35.632038 7 log.go:172] (0xc0017ddc20) (1) Data frame sent I0508 10:54:35.632053 7 log.go:172] (0xc002c856b0) (0xc0017ddc20) Stream removed, broadcasting: 1 I0508 10:54:35.632077 7 log.go:172] (0xc002c856b0) Go away received I0508 10:54:35.632440 7 log.go:172] (0xc002c856b0) (0xc0017ddc20) Stream removed, broadcasting: 1 I0508 10:54:35.632460 7 log.go:172] (0xc002c856b0) (0xc0017ddcc0) Stream removed, broadcasting: 3 I0508 10:54:35.632468 7 log.go:172] (0xc002c856b0) (0xc001b5a140) Stream removed, broadcasting: 5 May 8 10:54:35.632: INFO: Found all expected endpoints: [netserver-0] May 8 10:54:35.635: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.165:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-9176 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 8 10:54:35.635: INFO: >>> kubeConfig: /root/.kube/config I0508 10:54:35.662932 7 log.go:172] (0xc002f4a9a0) (0xc001bcc460) Create stream I0508 10:54:35.662956 7 log.go:172] (0xc002f4a9a0) (0xc001bcc460) Stream added, broadcasting: 1 I0508 10:54:35.664778 7 log.go:172] (0xc002f4a9a0) Reply frame received for 1 I0508 10:54:35.664804 7 log.go:172] (0xc002f4a9a0) (0xc0017ddd60) Create stream I0508 10:54:35.664817 7 log.go:172] (0xc002f4a9a0) (0xc0017ddd60) Stream added, broadcasting: 3 I0508 10:54:35.665885 7 log.go:172] (0xc002f4a9a0) Reply frame received for 3 I0508 10:54:35.665943 7 log.go:172] (0xc002f4a9a0) (0xc001b5a320) Create stream I0508 10:54:35.665971 7 log.go:172] (0xc002f4a9a0) 
(0xc001b5a320) Stream added, broadcasting: 5 I0508 10:54:35.666718 7 log.go:172] (0xc002f4a9a0) Reply frame received for 5 I0508 10:54:35.732084 7 log.go:172] (0xc002f4a9a0) Data frame received for 3 I0508 10:54:35.732120 7 log.go:172] (0xc0017ddd60) (3) Data frame handling I0508 10:54:35.732142 7 log.go:172] (0xc0017ddd60) (3) Data frame sent I0508 10:54:35.732153 7 log.go:172] (0xc002f4a9a0) Data frame received for 3 I0508 10:54:35.732163 7 log.go:172] (0xc0017ddd60) (3) Data frame handling I0508 10:54:35.732236 7 log.go:172] (0xc002f4a9a0) Data frame received for 5 I0508 10:54:35.732255 7 log.go:172] (0xc001b5a320) (5) Data frame handling I0508 10:54:35.733750 7 log.go:172] (0xc002f4a9a0) Data frame received for 1 I0508 10:54:35.733808 7 log.go:172] (0xc001bcc460) (1) Data frame handling I0508 10:54:35.733829 7 log.go:172] (0xc001bcc460) (1) Data frame sent I0508 10:54:35.733842 7 log.go:172] (0xc002f4a9a0) (0xc001bcc460) Stream removed, broadcasting: 1 I0508 10:54:35.733863 7 log.go:172] (0xc002f4a9a0) Go away received I0508 10:54:35.733988 7 log.go:172] (0xc002f4a9a0) (0xc001bcc460) Stream removed, broadcasting: 1 I0508 10:54:35.734026 7 log.go:172] (0xc002f4a9a0) (0xc0017ddd60) Stream removed, broadcasting: 3 I0508 10:54:35.734053 7 log.go:172] (0xc002f4a9a0) (0xc001b5a320) Stream removed, broadcasting: 5 May 8 10:54:35.734: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 8 10:54:35.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-9176" for this suite. 
• [SLOW TEST:30.752 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":19,"skipped":295,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 8 10:54:35.742: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 8 10:54:36.315: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 8 10:54:38.327: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724532076, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724532076, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724532076, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724532076, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 8 10:54:41.369: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook May 8 10:54:45.803: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config attach --namespace=webhook-2090 to-be-attached-pod -i -c=container1' May 8 10:54:48.955: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 8 10:54:48.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2090" for this suite. STEP: Destroying namespace "webhook-2090-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:13.461 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":275,"completed":20,"skipped":305,"failed":0} SSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 8 10:54:49.203: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: getting the auto-created API token STEP: reading a file in the container May 8 10:54:54.231: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4744 pod-service-account-33f5abe3-f6ff-43b4-84fc-aa6b44508bff -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container May 8 10:54:54.434: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4744 pod-service-account-33f5abe3-f6ff-43b4-84fc-aa6b44508bff -c=test -- cat 
/var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container May 8 10:54:54.698: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4744 pod-service-account-33f5abe3-f6ff-43b4-84fc-aa6b44508bff -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 8 10:54:55.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-4744" for this suite. • [SLOW TEST:5.886 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":275,"completed":21,"skipped":310,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 8 10:54:55.090: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name 
projected-configmap-test-volume-map-4c6c19f2-1bf8-4112-a914-b8a94b659ad9 STEP: Creating a pod to test consume configMaps May 8 10:54:55.377: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-53e57e1e-82c1-486f-908c-53a2ef8c9769" in namespace "projected-3934" to be "Succeeded or Failed" May 8 10:54:55.545: INFO: Pod "pod-projected-configmaps-53e57e1e-82c1-486f-908c-53a2ef8c9769": Phase="Pending", Reason="", readiness=false. Elapsed: 167.747211ms May 8 10:54:57.549: INFO: Pod "pod-projected-configmaps-53e57e1e-82c1-486f-908c-53a2ef8c9769": Phase="Pending", Reason="", readiness=false. Elapsed: 2.171123728s May 8 10:54:59.553: INFO: Pod "pod-projected-configmaps-53e57e1e-82c1-486f-908c-53a2ef8c9769": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.175427296s STEP: Saw pod success May 8 10:54:59.553: INFO: Pod "pod-projected-configmaps-53e57e1e-82c1-486f-908c-53a2ef8c9769" satisfied condition "Succeeded or Failed" May 8 10:54:59.555: INFO: Trying to get logs from node kali-worker2 pod pod-projected-configmaps-53e57e1e-82c1-486f-908c-53a2ef8c9769 container projected-configmap-volume-test: STEP: delete the pod May 8 10:54:59.585: INFO: Waiting for pod pod-projected-configmaps-53e57e1e-82c1-486f-908c-53a2ef8c9769 to disappear May 8 10:54:59.589: INFO: Pod pod-projected-configmaps-53e57e1e-82c1-486f-908c-53a2ef8c9769 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 8 10:54:59.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3934" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":22,"skipped":331,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 8 10:54:59.598: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test override command May 8 10:54:59.651: INFO: Waiting up to 5m0s for pod "client-containers-19521740-c087-4be2-b3ca-366a7339023e" in namespace "containers-5658" to be "Succeeded or Failed" May 8 10:54:59.694: INFO: Pod "client-containers-19521740-c087-4be2-b3ca-366a7339023e": Phase="Pending", Reason="", readiness=false. Elapsed: 42.739044ms May 8 10:55:01.698: INFO: Pod "client-containers-19521740-c087-4be2-b3ca-366a7339023e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047172707s May 8 10:55:03.934: INFO: Pod "client-containers-19521740-c087-4be2-b3ca-366a7339023e": Phase="Running", Reason="", readiness=true. Elapsed: 4.282559171s May 8 10:55:05.938: INFO: Pod "client-containers-19521740-c087-4be2-b3ca-366a7339023e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.286911563s STEP: Saw pod success May 8 10:55:05.938: INFO: Pod "client-containers-19521740-c087-4be2-b3ca-366a7339023e" satisfied condition "Succeeded or Failed" May 8 10:55:05.941: INFO: Trying to get logs from node kali-worker2 pod client-containers-19521740-c087-4be2-b3ca-366a7339023e container test-container: STEP: delete the pod May 8 10:55:06.031: INFO: Waiting for pod client-containers-19521740-c087-4be2-b3ca-366a7339023e to disappear May 8 10:55:06.040: INFO: Pod client-containers-19521740-c087-4be2-b3ca-366a7339023e no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 8 10:55:06.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-5658" for this suite. • [SLOW TEST:6.450 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":275,"completed":23,"skipped":350,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 8 10:55:06.049: INFO: >>> kubeConfig: /root/.kube/config STEP: Building 
a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name projected-secret-test-6dac0744-b332-4c80-862d-9047d2ac1fc4 STEP: Creating a pod to test consume secrets May 8 10:55:06.505: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-476112fb-ba24-4975-b31d-d9d46563730b" in namespace "projected-8562" to be "Succeeded or Failed" May 8 10:55:06.574: INFO: Pod "pod-projected-secrets-476112fb-ba24-4975-b31d-d9d46563730b": Phase="Pending", Reason="", readiness=false. Elapsed: 69.220143ms May 8 10:55:08.578: INFO: Pod "pod-projected-secrets-476112fb-ba24-4975-b31d-d9d46563730b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072911871s May 8 10:55:10.582: INFO: Pod "pod-projected-secrets-476112fb-ba24-4975-b31d-d9d46563730b": Phase="Running", Reason="", readiness=true. Elapsed: 4.07698661s May 8 10:55:12.586: INFO: Pod "pod-projected-secrets-476112fb-ba24-4975-b31d-d9d46563730b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.081221698s STEP: Saw pod success May 8 10:55:12.587: INFO: Pod "pod-projected-secrets-476112fb-ba24-4975-b31d-d9d46563730b" satisfied condition "Succeeded or Failed" May 8 10:55:12.589: INFO: Trying to get logs from node kali-worker2 pod pod-projected-secrets-476112fb-ba24-4975-b31d-d9d46563730b container projected-secret-volume-test: STEP: delete the pod May 8 10:55:12.616: INFO: Waiting for pod pod-projected-secrets-476112fb-ba24-4975-b31d-d9d46563730b to disappear May 8 10:55:12.640: INFO: Pod pod-projected-secrets-476112fb-ba24-4975-b31d-d9d46563730b no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 8 10:55:12.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8562" for this suite. • [SLOW TEST:6.602 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":24,"skipped":378,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 8 10:55:12.650: INFO: >>> kubeConfig: /root/.kube/config 
STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-secret-m9r9 STEP: Creating a pod to test atomic-volume-subpath May 8 10:55:12.797: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-m9r9" in namespace "subpath-8490" to be "Succeeded or Failed" May 8 10:55:12.856: INFO: Pod "pod-subpath-test-secret-m9r9": Phase="Pending", Reason="", readiness=false. Elapsed: 58.837905ms May 8 10:55:14.860: INFO: Pod "pod-subpath-test-secret-m9r9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062796705s May 8 10:55:16.910: INFO: Pod "pod-subpath-test-secret-m9r9": Phase="Running", Reason="", readiness=true. Elapsed: 4.112697234s May 8 10:55:18.914: INFO: Pod "pod-subpath-test-secret-m9r9": Phase="Running", Reason="", readiness=true. Elapsed: 6.116597377s May 8 10:55:20.918: INFO: Pod "pod-subpath-test-secret-m9r9": Phase="Running", Reason="", readiness=true. Elapsed: 8.121278594s May 8 10:55:22.923: INFO: Pod "pod-subpath-test-secret-m9r9": Phase="Running", Reason="", readiness=true. Elapsed: 10.125793162s May 8 10:55:24.926: INFO: Pod "pod-subpath-test-secret-m9r9": Phase="Running", Reason="", readiness=true. Elapsed: 12.129309839s May 8 10:55:26.931: INFO: Pod "pod-subpath-test-secret-m9r9": Phase="Running", Reason="", readiness=true. Elapsed: 14.133968229s May 8 10:55:28.935: INFO: Pod "pod-subpath-test-secret-m9r9": Phase="Running", Reason="", readiness=true. Elapsed: 16.138126623s May 8 10:55:30.939: INFO: Pod "pod-subpath-test-secret-m9r9": Phase="Running", Reason="", readiness=true. 
Elapsed: 18.142494688s May 8 10:55:32.944: INFO: Pod "pod-subpath-test-secret-m9r9": Phase="Running", Reason="", readiness=true. Elapsed: 20.146992826s May 8 10:55:34.948: INFO: Pod "pod-subpath-test-secret-m9r9": Phase="Running", Reason="", readiness=true. Elapsed: 22.150592429s May 8 10:55:37.011: INFO: Pod "pod-subpath-test-secret-m9r9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.214378068s STEP: Saw pod success May 8 10:55:37.011: INFO: Pod "pod-subpath-test-secret-m9r9" satisfied condition "Succeeded or Failed" May 8 10:55:37.015: INFO: Trying to get logs from node kali-worker pod pod-subpath-test-secret-m9r9 container test-container-subpath-secret-m9r9: STEP: delete the pod May 8 10:55:37.184: INFO: Waiting for pod pod-subpath-test-secret-m9r9 to disappear May 8 10:55:37.197: INFO: Pod pod-subpath-test-secret-m9r9 no longer exists STEP: Deleting pod pod-subpath-test-secret-m9r9 May 8 10:55:37.197: INFO: Deleting pod "pod-subpath-test-secret-m9r9" in namespace "subpath-8490" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 8 10:55:37.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8490" for this suite. 
• [SLOW TEST:24.563 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":275,"completed":25,"skipped":390,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 8 10:55:37.214: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-36810678-b5d0-4ae8-9f29-86f11b63b918
STEP: Creating a pod to test consume configMaps
May 8 10:55:37.343: INFO: Waiting up to 5m0s for pod "pod-configmaps-0eaf39ab-10fe-4f60-b146-ca97ecbfab7f" in namespace "configmap-92" to be "Succeeded or Failed"
May 8 10:55:37.425: INFO: Pod "pod-configmaps-0eaf39ab-10fe-4f60-b146-ca97ecbfab7f": Phase="Pending", Reason="", readiness=false. Elapsed: 81.502662ms
May 8 10:55:39.428: INFO: Pod "pod-configmaps-0eaf39ab-10fe-4f60-b146-ca97ecbfab7f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.085015251s
May 8 10:55:41.433: INFO: Pod "pod-configmaps-0eaf39ab-10fe-4f60-b146-ca97ecbfab7f": Phase="Running", Reason="", readiness=true. Elapsed: 4.089826943s
May 8 10:55:43.438: INFO: Pod "pod-configmaps-0eaf39ab-10fe-4f60-b146-ca97ecbfab7f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.09429932s
STEP: Saw pod success
May 8 10:55:43.438: INFO: Pod "pod-configmaps-0eaf39ab-10fe-4f60-b146-ca97ecbfab7f" satisfied condition "Succeeded or Failed"
May 8 10:55:43.440: INFO: Trying to get logs from node kali-worker pod pod-configmaps-0eaf39ab-10fe-4f60-b146-ca97ecbfab7f container configmap-volume-test:
STEP: delete the pod
May 8 10:55:43.480: INFO: Waiting for pod pod-configmaps-0eaf39ab-10fe-4f60-b146-ca97ecbfab7f to disappear
May 8 10:55:43.490: INFO: Pod pod-configmaps-0eaf39ab-10fe-4f60-b146-ca97ecbfab7f no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 8 10:55:43.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-92" for this suite.
• [SLOW TEST:6.350 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":275,"completed":26,"skipped":417,"failed":0}
SSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 8 10:55:43.564: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-203
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-203
STEP: Creating statefulset with conflicting port in namespace statefulset-203
STEP: Waiting until pod test-pod will start running in namespace statefulset-203
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-203
May 8 10:55:49.775: INFO: Observed stateful pod in namespace: statefulset-203, name: ss-0, uid: fdb0f575-8cf8-4ffd-967a-b2d4eb452675, status phase: Failed. Waiting for statefulset controller to delete.
May 8 10:55:49.799: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-203
STEP: Removing pod with conflicting port in namespace statefulset-203
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-203 and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
May 8 10:55:55.947: INFO: Deleting all statefulset in ns statefulset-203
May 8 10:55:55.950: INFO: Scaling statefulset ss to 0
May 8 10:56:05.986: INFO: Waiting for statefulset status.replicas updated to 0
May 8 10:56:05.990: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 8 10:56:06.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-203" for this suite.
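The StatefulSet test above first observes ss-0 in phase Failed, then a delete event, and finally waits for the controller to recreate the pod. The observation logic can be sketched as a loop over a stream of pod events (a simplified model with hypothetical types; the real test watches the API server through a watch interface):

```go
package main

import "fmt"

// podEvent is an illustrative stand-in for a watch event on pod ss-0.
type podEvent struct {
	kind  string // "MODIFIED", "DELETED", "ADDED"
	phase string // "Failed", "Running", ...
}

// waitRecreated consumes events until it has seen the old pod deleted
// and a replacement reach Running, mirroring the log's "Observed delete
// event ... will be recreated ... running state" steps.
func waitRecreated(events <-chan podEvent) error {
	deleted := false
	for ev := range events {
		switch {
		case ev.kind == "DELETED":
			deleted = true
		case deleted && ev.kind == "ADDED" && ev.phase == "Running":
			return nil
		}
	}
	return fmt.Errorf("event stream closed before pod was recreated")
}

func main() {
	events := make(chan podEvent, 3)
	events <- podEvent{"MODIFIED", "Failed"} // ss-0 lost the port conflict
	events <- podEvent{"DELETED", "Failed"}  // statefulset controller removes it
	events <- podEvent{"ADDED", "Running"}   // controller recreates it
	close(events)
	fmt.Println("recreated:", waitRecreated(events) == nil)
}
```

In the real test the recreation can only succeed after the conflicting host-port pod is removed, which is why the log interleaves "Removing pod with conflicting port" between the delete and the recreate wait.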
• [SLOW TEST:22.446 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":275,"completed":27,"skipped":423,"failed":0}
SSSS
------------------------------
[sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 8 10:56:06.010: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May 8 10:56:06.104: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b3f065f8-7d24-4d19-ae0f-0a159851025e" in namespace "downward-api-3432" to be "Succeeded or Failed"
May 8 10:56:06.114: INFO: Pod "downwardapi-volume-b3f065f8-7d24-4d19-ae0f-0a159851025e": Phase="Pending", Reason="", readiness=false. Elapsed: 9.945872ms
May 8 10:56:08.176: INFO: Pod "downwardapi-volume-b3f065f8-7d24-4d19-ae0f-0a159851025e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072163668s
May 8 10:56:10.239: INFO: Pod "downwardapi-volume-b3f065f8-7d24-4d19-ae0f-0a159851025e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.135224774s
STEP: Saw pod success
May 8 10:56:10.239: INFO: Pod "downwardapi-volume-b3f065f8-7d24-4d19-ae0f-0a159851025e" satisfied condition "Succeeded or Failed"
May 8 10:56:10.243: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-b3f065f8-7d24-4d19-ae0f-0a159851025e container client-container:
STEP: delete the pod
May 8 10:56:10.277: INFO: Waiting for pod downwardapi-volume-b3f065f8-7d24-4d19-ae0f-0a159851025e to disappear
May 8 10:56:10.302: INFO: Pod downwardapi-volume-b3f065f8-7d24-4d19-ae0f-0a159851025e no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 8 10:56:10.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3432" for this suite.
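The downward API test above verifies that when a container declares no memory limit, the volume reports the node's allocatable memory instead. The defaulting rule it exercises can be sketched as a simple fallback (a hedged illustration with made-up byte values; the real lookup goes through the kubelet's view of node allocatable):

```go
package main

import "fmt"

// effectiveMemoryLimit models the downward API defaulting rule under
// test: an unset (zero) container memory limit falls back to the node's
// allocatable memory. All values are illustrative byte counts.
func effectiveMemoryLimit(containerLimit, nodeAllocatable int64) int64 {
	if containerLimit > 0 {
		return containerLimit
	}
	return nodeAllocatable
}

func main() {
	const nodeAllocatable = int64(4 * 1024 * 1024 * 1024) // pretend the node allocates 4Gi
	fmt.Println(effectiveMemoryLimit(0, nodeAllocatable))             // no limit set: node allocatable wins
	fmt.Println(effectiveMemoryLimit(512*1024*1024, nodeAllocatable)) // explicit 512Mi limit wins
}
```

The test's client container simply prints the value it reads from the downward API volume file, and the framework compares it against the node's allocatable memory.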
•
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":28,"skipped":427,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 8 10:56:10.310: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May 8 10:56:11.040: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6c44a85d-f170-48c7-bb55-e4b052ad0e78" in namespace "projected-6560" to be "Succeeded or Failed"
May 8 10:56:11.138: INFO: Pod "downwardapi-volume-6c44a85d-f170-48c7-bb55-e4b052ad0e78": Phase="Pending", Reason="", readiness=false. Elapsed: 98.101063ms
May 8 10:56:13.162: INFO: Pod "downwardapi-volume-6c44a85d-f170-48c7-bb55-e4b052ad0e78": Phase="Pending", Reason="", readiness=false. Elapsed: 2.121899042s
May 8 10:56:15.166: INFO: Pod "downwardapi-volume-6c44a85d-f170-48c7-bb55-e4b052ad0e78": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.126168752s
STEP: Saw pod success
May 8 10:56:15.166: INFO: Pod "downwardapi-volume-6c44a85d-f170-48c7-bb55-e4b052ad0e78" satisfied condition "Succeeded or Failed"
May 8 10:56:15.168: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-6c44a85d-f170-48c7-bb55-e4b052ad0e78 container client-container:
STEP: delete the pod
May 8 10:56:15.208: INFO: Waiting for pod downwardapi-volume-6c44a85d-f170-48c7-bb55-e4b052ad0e78 to disappear
May 8 10:56:15.212: INFO: Pod downwardapi-volume-6c44a85d-f170-48c7-bb55-e4b052ad0e78 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 8 10:56:15.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6560" for this suite.
•
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":275,"completed":29,"skipped":453,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 8 10:56:15.220: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 8 10:56:15.815: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 8 10:56:17.825: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724532175, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724532175, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724532175, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724532175, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 8 10:56:20.884: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 8 10:56:20.888: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-3508-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource while v1 is storage version
STEP: Patching Custom Resource Definition to set v2 as storage
STEP: Patching the custom resource while v2 is storage version
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 8 10:56:22.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3892" for this suite.
STEP: Destroying namespace "webhook-3892-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:7.018 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":275,"completed":30,"skipped":459,"failed":0}
SSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 8 10:56:22.238: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating
hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false May 8 10:56:32.394: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4972 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 8 10:56:32.394: INFO: >>> kubeConfig: /root/.kube/config I0508 10:56:32.432492 7 log.go:172] (0xc002f4b4a0) (0xc002aaad20) Create stream I0508 10:56:32.432517 7 log.go:172] (0xc002f4b4a0) (0xc002aaad20) Stream added, broadcasting: 1 I0508 10:56:32.434306 7 log.go:172] (0xc002f4b4a0) Reply frame received for 1 I0508 10:56:32.434345 7 log.go:172] (0xc002f4b4a0) (0xc002bb8140) Create stream I0508 10:56:32.434367 7 log.go:172] (0xc002f4b4a0) (0xc002bb8140) Stream added, broadcasting: 3 I0508 10:56:32.435387 7 log.go:172] (0xc002f4b4a0) Reply frame received for 3 I0508 10:56:32.435439 7 log.go:172] (0xc002f4b4a0) (0xc001bcd9a0) Create stream I0508 10:56:32.435456 7 log.go:172] (0xc002f4b4a0) (0xc001bcd9a0) Stream added, broadcasting: 5 I0508 10:56:32.436432 7 log.go:172] (0xc002f4b4a0) Reply frame received for 5 I0508 10:56:32.520880 7 log.go:172] (0xc002f4b4a0) Data frame received for 5 I0508 10:56:32.520933 7 log.go:172] (0xc001bcd9a0) (5) Data frame handling I0508 10:56:32.520964 7 log.go:172] (0xc002f4b4a0) Data frame received for 3 I0508 10:56:32.520977 7 log.go:172] (0xc002bb8140) (3) Data frame handling I0508 10:56:32.520994 7 log.go:172] (0xc002bb8140) (3) Data frame sent I0508 10:56:32.521010 7 log.go:172] (0xc002f4b4a0) Data frame received for 3 I0508 10:56:32.521035 7 log.go:172] (0xc002bb8140) (3) Data frame handling I0508 10:56:32.523105 7 log.go:172] (0xc002f4b4a0) Data frame received for 1 I0508 10:56:32.523156 7 log.go:172] (0xc002aaad20) (1) Data frame handling I0508 10:56:32.523191 7 log.go:172] (0xc002aaad20) (1) Data frame sent I0508 10:56:32.523222 7 log.go:172] (0xc002f4b4a0) (0xc002aaad20) Stream 
removed, broadcasting: 1 I0508 10:56:32.523256 7 log.go:172] (0xc002f4b4a0) Go away received I0508 10:56:32.523360 7 log.go:172] (0xc002f4b4a0) (0xc002aaad20) Stream removed, broadcasting: 1 I0508 10:56:32.523380 7 log.go:172] (0xc002f4b4a0) (0xc002bb8140) Stream removed, broadcasting: 3 I0508 10:56:32.523397 7 log.go:172] (0xc002f4b4a0) (0xc001bcd9a0) Stream removed, broadcasting: 5 May 8 10:56:32.523: INFO: Exec stderr: "" May 8 10:56:32.523: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4972 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 8 10:56:32.523: INFO: >>> kubeConfig: /root/.kube/config I0508 10:56:32.559315 7 log.go:172] (0xc0028fa4d0) (0xc002bb83c0) Create stream I0508 10:56:32.559344 7 log.go:172] (0xc0028fa4d0) (0xc002bb83c0) Stream added, broadcasting: 1 I0508 10:56:32.561633 7 log.go:172] (0xc0028fa4d0) Reply frame received for 1 I0508 10:56:32.561676 7 log.go:172] (0xc0028fa4d0) (0xc002aaadc0) Create stream I0508 10:56:32.561692 7 log.go:172] (0xc0028fa4d0) (0xc002aaadc0) Stream added, broadcasting: 3 I0508 10:56:32.562708 7 log.go:172] (0xc0028fa4d0) Reply frame received for 3 I0508 10:56:32.562747 7 log.go:172] (0xc0028fa4d0) (0xc001bcda40) Create stream I0508 10:56:32.562762 7 log.go:172] (0xc0028fa4d0) (0xc001bcda40) Stream added, broadcasting: 5 I0508 10:56:32.563759 7 log.go:172] (0xc0028fa4d0) Reply frame received for 5 I0508 10:56:32.637322 7 log.go:172] (0xc0028fa4d0) Data frame received for 5 I0508 10:56:32.637353 7 log.go:172] (0xc001bcda40) (5) Data frame handling I0508 10:56:32.637374 7 log.go:172] (0xc0028fa4d0) Data frame received for 3 I0508 10:56:32.637385 7 log.go:172] (0xc002aaadc0) (3) Data frame handling I0508 10:56:32.637395 7 log.go:172] (0xc002aaadc0) (3) Data frame sent I0508 10:56:32.637412 7 log.go:172] (0xc0028fa4d0) Data frame received for 3 I0508 10:56:32.637420 7 log.go:172] (0xc002aaadc0) (3) Data frame 
handling I0508 10:56:32.639252 7 log.go:172] (0xc0028fa4d0) Data frame received for 1 I0508 10:56:32.639289 7 log.go:172] (0xc002bb83c0) (1) Data frame handling I0508 10:56:32.639339 7 log.go:172] (0xc002bb83c0) (1) Data frame sent I0508 10:56:32.639387 7 log.go:172] (0xc0028fa4d0) (0xc002bb83c0) Stream removed, broadcasting: 1 I0508 10:56:32.639421 7 log.go:172] (0xc0028fa4d0) Go away received I0508 10:56:32.639501 7 log.go:172] (0xc0028fa4d0) (0xc002bb83c0) Stream removed, broadcasting: 1 I0508 10:56:32.639533 7 log.go:172] (0xc0028fa4d0) (0xc002aaadc0) Stream removed, broadcasting: 3 I0508 10:56:32.639549 7 log.go:172] (0xc0028fa4d0) (0xc001bcda40) Stream removed, broadcasting: 5 May 8 10:56:32.639: INFO: Exec stderr: "" May 8 10:56:32.639: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4972 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 8 10:56:32.639: INFO: >>> kubeConfig: /root/.kube/config I0508 10:56:32.673714 7 log.go:172] (0xc002d829a0) (0xc00207ce60) Create stream I0508 10:56:32.673741 7 log.go:172] (0xc002d829a0) (0xc00207ce60) Stream added, broadcasting: 1 I0508 10:56:32.676437 7 log.go:172] (0xc002d829a0) Reply frame received for 1 I0508 10:56:32.676464 7 log.go:172] (0xc002d829a0) (0xc002aaae60) Create stream I0508 10:56:32.676477 7 log.go:172] (0xc002d829a0) (0xc002aaae60) Stream added, broadcasting: 3 I0508 10:56:32.677951 7 log.go:172] (0xc002d829a0) Reply frame received for 3 I0508 10:56:32.678019 7 log.go:172] (0xc002d829a0) (0xc002aaaf00) Create stream I0508 10:56:32.678075 7 log.go:172] (0xc002d829a0) (0xc002aaaf00) Stream added, broadcasting: 5 I0508 10:56:32.679216 7 log.go:172] (0xc002d829a0) Reply frame received for 5 I0508 10:56:32.746313 7 log.go:172] (0xc002d829a0) Data frame received for 5 I0508 10:56:32.746335 7 log.go:172] (0xc002aaaf00) (5) Data frame handling I0508 10:56:32.746376 7 log.go:172] (0xc002d829a0) Data frame received 
for 3 I0508 10:56:32.746424 7 log.go:172] (0xc002aaae60) (3) Data frame handling I0508 10:56:32.746451 7 log.go:172] (0xc002aaae60) (3) Data frame sent I0508 10:56:32.746466 7 log.go:172] (0xc002d829a0) Data frame received for 3 I0508 10:56:32.746478 7 log.go:172] (0xc002aaae60) (3) Data frame handling I0508 10:56:32.747782 7 log.go:172] (0xc002d829a0) Data frame received for 1 I0508 10:56:32.747802 7 log.go:172] (0xc00207ce60) (1) Data frame handling I0508 10:56:32.747810 7 log.go:172] (0xc00207ce60) (1) Data frame sent I0508 10:56:32.747945 7 log.go:172] (0xc002d829a0) (0xc00207ce60) Stream removed, broadcasting: 1 I0508 10:56:32.748040 7 log.go:172] (0xc002d829a0) Go away received I0508 10:56:32.748085 7 log.go:172] (0xc002d829a0) (0xc00207ce60) Stream removed, broadcasting: 1 I0508 10:56:32.748132 7 log.go:172] (0xc002d829a0) (0xc002aaae60) Stream removed, broadcasting: 3 I0508 10:56:32.748150 7 log.go:172] (0xc002d829a0) (0xc002aaaf00) Stream removed, broadcasting: 5 May 8 10:56:32.748: INFO: Exec stderr: "" May 8 10:56:32.748: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4972 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 8 10:56:32.748: INFO: >>> kubeConfig: /root/.kube/config I0508 10:56:32.776231 7 log.go:172] (0xc002c85b80) (0xc001fd4aa0) Create stream I0508 10:56:32.776267 7 log.go:172] (0xc002c85b80) (0xc001fd4aa0) Stream added, broadcasting: 1 I0508 10:56:32.779319 7 log.go:172] (0xc002c85b80) Reply frame received for 1 I0508 10:56:32.779360 7 log.go:172] (0xc002c85b80) (0xc001bcdb80) Create stream I0508 10:56:32.779376 7 log.go:172] (0xc002c85b80) (0xc001bcdb80) Stream added, broadcasting: 3 I0508 10:56:32.780134 7 log.go:172] (0xc002c85b80) Reply frame received for 3 I0508 10:56:32.780171 7 log.go:172] (0xc002c85b80) (0xc002aab0e0) Create stream I0508 10:56:32.780190 7 log.go:172] (0xc002c85b80) (0xc002aab0e0) Stream added, broadcasting: 5 
I0508 10:56:32.780969 7 log.go:172] (0xc002c85b80) Reply frame received for 5 I0508 10:56:32.848281 7 log.go:172] (0xc002c85b80) Data frame received for 5 I0508 10:56:32.848327 7 log.go:172] (0xc002aab0e0) (5) Data frame handling I0508 10:56:32.848359 7 log.go:172] (0xc002c85b80) Data frame received for 3 I0508 10:56:32.848374 7 log.go:172] (0xc001bcdb80) (3) Data frame handling I0508 10:56:32.848400 7 log.go:172] (0xc001bcdb80) (3) Data frame sent I0508 10:56:32.848417 7 log.go:172] (0xc002c85b80) Data frame received for 3 I0508 10:56:32.848427 7 log.go:172] (0xc001bcdb80) (3) Data frame handling I0508 10:56:32.849933 7 log.go:172] (0xc002c85b80) Data frame received for 1 I0508 10:56:32.849963 7 log.go:172] (0xc001fd4aa0) (1) Data frame handling I0508 10:56:32.849985 7 log.go:172] (0xc001fd4aa0) (1) Data frame sent I0508 10:56:32.850019 7 log.go:172] (0xc002c85b80) (0xc001fd4aa0) Stream removed, broadcasting: 1 I0508 10:56:32.850044 7 log.go:172] (0xc002c85b80) Go away received I0508 10:56:32.850176 7 log.go:172] (0xc002c85b80) (0xc001fd4aa0) Stream removed, broadcasting: 1 I0508 10:56:32.850199 7 log.go:172] (0xc002c85b80) (0xc001bcdb80) Stream removed, broadcasting: 3 I0508 10:56:32.850211 7 log.go:172] (0xc002c85b80) (0xc002aab0e0) Stream removed, broadcasting: 5 May 8 10:56:32.850: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount May 8 10:56:32.850: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4972 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 8 10:56:32.850: INFO: >>> kubeConfig: /root/.kube/config I0508 10:56:32.881643 7 log.go:172] (0xc0028fab00) (0xc002bb85a0) Create stream I0508 10:56:32.881664 7 log.go:172] (0xc0028fab00) (0xc002bb85a0) Stream added, broadcasting: 1 I0508 10:56:32.884703 7 log.go:172] (0xc0028fab00) Reply frame received for 1 I0508 10:56:32.884757 7 
log.go:172] (0xc0028fab00) (0xc00207cf00) Create stream I0508 10:56:32.884784 7 log.go:172] (0xc0028fab00) (0xc00207cf00) Stream added, broadcasting: 3 I0508 10:56:32.886107 7 log.go:172] (0xc0028fab00) Reply frame received for 3 I0508 10:56:32.886140 7 log.go:172] (0xc0028fab00) (0xc001fd4b40) Create stream I0508 10:56:32.886150 7 log.go:172] (0xc0028fab00) (0xc001fd4b40) Stream added, broadcasting: 5 I0508 10:56:32.887340 7 log.go:172] (0xc0028fab00) Reply frame received for 5 I0508 10:56:32.946425 7 log.go:172] (0xc0028fab00) Data frame received for 5 I0508 10:56:32.946488 7 log.go:172] (0xc001fd4b40) (5) Data frame handling I0508 10:56:32.946527 7 log.go:172] (0xc0028fab00) Data frame received for 3 I0508 10:56:32.946572 7 log.go:172] (0xc00207cf00) (3) Data frame handling I0508 10:56:32.946607 7 log.go:172] (0xc00207cf00) (3) Data frame sent I0508 10:56:32.946629 7 log.go:172] (0xc0028fab00) Data frame received for 3 I0508 10:56:32.946642 7 log.go:172] (0xc00207cf00) (3) Data frame handling I0508 10:56:32.948154 7 log.go:172] (0xc0028fab00) Data frame received for 1 I0508 10:56:32.948188 7 log.go:172] (0xc002bb85a0) (1) Data frame handling I0508 10:56:32.948208 7 log.go:172] (0xc002bb85a0) (1) Data frame sent I0508 10:56:32.948236 7 log.go:172] (0xc0028fab00) (0xc002bb85a0) Stream removed, broadcasting: 1 I0508 10:56:32.948269 7 log.go:172] (0xc0028fab00) Go away received I0508 10:56:32.948406 7 log.go:172] (0xc0028fab00) (0xc002bb85a0) Stream removed, broadcasting: 1 I0508 10:56:32.948432 7 log.go:172] (0xc0028fab00) (0xc00207cf00) Stream removed, broadcasting: 3 I0508 10:56:32.948453 7 log.go:172] (0xc0028fab00) (0xc001fd4b40) Stream removed, broadcasting: 5 May 8 10:56:32.948: INFO: Exec stderr: "" May 8 10:56:32.948: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4972 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 8 10:56:32.948: INFO: >>> 
kubeConfig: /root/.kube/config I0508 10:56:32.980692 7 log.go:172] (0xc002d82fd0) (0xc00207d180) Create stream I0508 10:56:32.980719 7 log.go:172] (0xc002d82fd0) (0xc00207d180) Stream added, broadcasting: 1 I0508 10:56:32.983984 7 log.go:172] (0xc002d82fd0) Reply frame received for 1 I0508 10:56:32.984010 7 log.go:172] (0xc002d82fd0) (0xc002aab180) Create stream I0508 10:56:32.984019 7 log.go:172] (0xc002d82fd0) (0xc002aab180) Stream added, broadcasting: 3 I0508 10:56:32.984931 7 log.go:172] (0xc002d82fd0) Reply frame received for 3 I0508 10:56:32.984972 7 log.go:172] (0xc002d82fd0) (0xc002bb8640) Create stream I0508 10:56:32.985001 7 log.go:172] (0xc002d82fd0) (0xc002bb8640) Stream added, broadcasting: 5 I0508 10:56:32.986050 7 log.go:172] (0xc002d82fd0) Reply frame received for 5 I0508 10:56:33.050029 7 log.go:172] (0xc002d82fd0) Data frame received for 5 I0508 10:56:33.050070 7 log.go:172] (0xc002bb8640) (5) Data frame handling I0508 10:56:33.050113 7 log.go:172] (0xc002d82fd0) Data frame received for 3 I0508 10:56:33.050151 7 log.go:172] (0xc002aab180) (3) Data frame handling I0508 10:56:33.050178 7 log.go:172] (0xc002aab180) (3) Data frame sent I0508 10:56:33.050194 7 log.go:172] (0xc002d82fd0) Data frame received for 3 I0508 10:56:33.050216 7 log.go:172] (0xc002aab180) (3) Data frame handling I0508 10:56:33.051271 7 log.go:172] (0xc002d82fd0) Data frame received for 1 I0508 10:56:33.051349 7 log.go:172] (0xc00207d180) (1) Data frame handling I0508 10:56:33.051397 7 log.go:172] (0xc00207d180) (1) Data frame sent I0508 10:56:33.051427 7 log.go:172] (0xc002d82fd0) (0xc00207d180) Stream removed, broadcasting: 1 I0508 10:56:33.051472 7 log.go:172] (0xc002d82fd0) Go away received I0508 10:56:33.051868 7 log.go:172] (0xc002d82fd0) (0xc00207d180) Stream removed, broadcasting: 1 I0508 10:56:33.051895 7 log.go:172] (0xc002d82fd0) (0xc002aab180) Stream removed, broadcasting: 3 I0508 10:56:33.051906 7 log.go:172] (0xc002d82fd0) (0xc002bb8640) Stream removed, 
broadcasting: 5
May  8 10:56:33.051: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
May  8 10:56:33.051: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4972 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May  8 10:56:33.051: INFO: >>> kubeConfig: /root/.kube/config
I0508 10:56:33.088481       7 log.go:172] (0xc002f4bb80) (0xc002aab360) Create stream
I0508 10:56:33.088512       7 log.go:172] (0xc002f4bb80) (0xc002aab360) Stream added, broadcasting: 1
I0508 10:56:33.092915       7 log.go:172] (0xc002f4bb80) Reply frame received for 1
I0508 10:56:33.092967       7 log.go:172] (0xc002f4bb80) (0xc000fd8000) Create stream
I0508 10:56:33.093004       7 log.go:172] (0xc002f4bb80) (0xc000fd8000) Stream added, broadcasting: 3
I0508 10:56:33.094377       7 log.go:172] (0xc002f4bb80) Reply frame received for 3
I0508 10:56:33.094441       7 log.go:172] (0xc002f4bb80) (0xc000e020a0) Create stream
I0508 10:56:33.094485       7 log.go:172] (0xc002f4bb80) (0xc000e020a0) Stream added, broadcasting: 5
I0508 10:56:33.095659       7 log.go:172] (0xc002f4bb80) Reply frame received for 5
I0508 10:56:33.153985       7 log.go:172] (0xc002f4bb80) Data frame received for 3
I0508 10:56:33.154027       7 log.go:172] (0xc000fd8000) (3) Data frame handling
I0508 10:56:33.154049       7 log.go:172] (0xc000fd8000) (3) Data frame sent
I0508 10:56:33.154083       7 log.go:172] (0xc002f4bb80) Data frame received for 3
I0508 10:56:33.154100       7 log.go:172] (0xc000fd8000) (3) Data frame handling
I0508 10:56:33.154137       7 log.go:172] (0xc002f4bb80) Data frame received for 5
I0508 10:56:33.154161       7 log.go:172] (0xc000e020a0) (5) Data frame handling
I0508 10:56:33.155721       7 log.go:172] (0xc002f4bb80) Data frame received for 1
I0508 10:56:33.155754       7 log.go:172] (0xc002aab360) (1) Data frame handling
I0508 10:56:33.155767       7 log.go:172] (0xc002aab360) (1) Data frame sent
I0508 10:56:33.155787       7 log.go:172] (0xc002f4bb80) (0xc002aab360) Stream removed, broadcasting: 1
I0508 10:56:33.155888       7 log.go:172] (0xc002f4bb80) (0xc002aab360) Stream removed, broadcasting: 1
I0508 10:56:33.155926       7 log.go:172] (0xc002f4bb80) (0xc000fd8000) Stream removed, broadcasting: 3
I0508 10:56:33.155942       7 log.go:172] (0xc002f4bb80) (0xc000e020a0) Stream removed, broadcasting: 5
May  8 10:56:33.155: INFO: Exec stderr: ""
May  8 10:56:33.155: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4972 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May  8 10:56:33.156: INFO: >>> kubeConfig: /root/.kube/config
I0508 10:56:33.156057       7 log.go:172] (0xc002f4bb80) Go away received
I0508 10:56:33.187725       7 log.go:172] (0xc0028fa160) (0xc001f5e320) Create stream
I0508 10:56:33.187749       7 log.go:172] (0xc0028fa160) (0xc001f5e320) Stream added, broadcasting: 1
I0508 10:56:33.189750       7 log.go:172] (0xc0028fa160) Reply frame received for 1
I0508 10:56:33.189804       7 log.go:172] (0xc0028fa160) (0xc001b5a0a0) Create stream
I0508 10:56:33.189822       7 log.go:172] (0xc0028fa160) (0xc001b5a0a0) Stream added, broadcasting: 3
I0508 10:56:33.190994       7 log.go:172] (0xc0028fa160) Reply frame received for 3
I0508 10:56:33.191041       7 log.go:172] (0xc0028fa160) (0xc0001a7360) Create stream
I0508 10:56:33.191060       7 log.go:172] (0xc0028fa160) (0xc0001a7360) Stream added, broadcasting: 5
I0508 10:56:33.192219       7 log.go:172] (0xc0028fa160) Reply frame received for 5
I0508 10:56:33.258027       7 log.go:172] (0xc0028fa160) Data frame received for 5
I0508 10:56:33.258080       7 log.go:172] (0xc0001a7360) (5) Data frame handling
I0508 10:56:33.258127       7 log.go:172] (0xc0028fa160) Data frame received for 3
I0508 10:56:33.258147       7 log.go:172] (0xc001b5a0a0) (3) Data frame handling
I0508 10:56:33.258171       7 log.go:172] (0xc001b5a0a0) (3) Data frame sent
I0508 10:56:33.258188       7 log.go:172] (0xc0028fa160) Data frame received for 3
I0508 10:56:33.258205       7 log.go:172] (0xc001b5a0a0) (3) Data frame handling
I0508 10:56:33.259709       7 log.go:172] (0xc0028fa160) Data frame received for 1
I0508 10:56:33.259739       7 log.go:172] (0xc001f5e320) (1) Data frame handling
I0508 10:56:33.259751       7 log.go:172] (0xc001f5e320) (1) Data frame sent
I0508 10:56:33.259768       7 log.go:172] (0xc0028fa160) (0xc001f5e320) Stream removed, broadcasting: 1
I0508 10:56:33.259812       7 log.go:172] (0xc0028fa160) Go away received
I0508 10:56:33.259830       7 log.go:172] (0xc0028fa160) (0xc001f5e320) Stream removed, broadcasting: 1
I0508 10:56:33.259851       7 log.go:172] (0xc0028fa160) (0xc001b5a0a0) Stream removed, broadcasting: 3
I0508 10:56:33.259863       7 log.go:172] (0xc0028fa160) (0xc0001a7360) Stream removed, broadcasting: 5
May  8 10:56:33.259: INFO: Exec stderr: ""
May  8 10:56:33.259: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4972 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May  8 10:56:33.259: INFO: >>> kubeConfig: /root/.kube/config
I0508 10:56:33.287792       7 log.go:172] (0xc0028fa840) (0xc001f5e820) Create stream
I0508 10:56:33.287814       7 log.go:172] (0xc0028fa840) (0xc001f5e820) Stream added, broadcasting: 1
I0508 10:56:33.289768       7 log.go:172] (0xc0028fa840) Reply frame received for 1
I0508 10:56:33.289840       7 log.go:172] (0xc0028fa840) (0xc001b5a140) Create stream
I0508 10:56:33.289876       7 log.go:172] (0xc0028fa840) (0xc001b5a140) Stream added, broadcasting: 3
I0508 10:56:33.290951       7 log.go:172] (0xc0028fa840) Reply frame received for 3
I0508 10:56:33.290991       7 log.go:172] (0xc0028fa840) (0xc001b5a320) Create stream
I0508 10:56:33.291004       7 log.go:172] (0xc0028fa840) (0xc001b5a320) Stream added, broadcasting: 5
I0508 10:56:33.292036       7 log.go:172] (0xc0028fa840) Reply frame received for 5
I0508 10:56:33.354094       7 log.go:172] (0xc0028fa840) Data frame received for 3
I0508 10:56:33.354140       7 log.go:172] (0xc001b5a140) (3) Data frame handling
I0508 10:56:33.354164       7 log.go:172] (0xc001b5a140) (3) Data frame sent
I0508 10:56:33.354199       7 log.go:172] (0xc0028fa840) Data frame received for 3
I0508 10:56:33.354220       7 log.go:172] (0xc001b5a140) (3) Data frame handling
I0508 10:56:33.354247       7 log.go:172] (0xc0028fa840) Data frame received for 5
I0508 10:56:33.354272       7 log.go:172] (0xc001b5a320) (5) Data frame handling
I0508 10:56:33.356057       7 log.go:172] (0xc0028fa840) Data frame received for 1
I0508 10:56:33.356075       7 log.go:172] (0xc001f5e820) (1) Data frame handling
I0508 10:56:33.356091       7 log.go:172] (0xc001f5e820) (1) Data frame sent
I0508 10:56:33.356331       7 log.go:172] (0xc0028fa840) (0xc001f5e820) Stream removed, broadcasting: 1
I0508 10:56:33.356414       7 log.go:172] (0xc0028fa840) (0xc001f5e820) Stream removed, broadcasting: 1
I0508 10:56:33.356433       7 log.go:172] (0xc0028fa840) (0xc001b5a140) Stream removed, broadcasting: 3
I0508 10:56:33.356527       7 log.go:172] (0xc0028fa840) Go away received
I0508 10:56:33.356565       7 log.go:172] (0xc0028fa840) (0xc001b5a320) Stream removed, broadcasting: 5
May  8 10:56:33.356: INFO: Exec stderr: ""
May  8 10:56:33.356: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4972 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May  8 10:56:33.356: INFO: >>> kubeConfig: /root/.kube/config
I0508 10:56:33.387716       7 log.go:172] (0xc000994580) (0xc000b346e0) Create stream
I0508 10:56:33.387741       7 log.go:172] (0xc000994580) (0xc000b346e0) Stream added, broadcasting: 1
I0508 10:56:33.394928       7 log.go:172] (0xc000994580) Reply frame received for 1
I0508 10:56:33.395012       7 log.go:172] (0xc000994580) (0xc000b9a0a0) Create stream
I0508 10:56:33.395043       7 log.go:172] (0xc000994580) (0xc000b9a0a0) Stream added, broadcasting: 3
I0508 10:56:33.397736       7 log.go:172] (0xc000994580) Reply frame received for 3
I0508 10:56:33.397764       7 log.go:172] (0xc000994580) (0xc001b5a3c0) Create stream
I0508 10:56:33.397772       7 log.go:172] (0xc000994580) (0xc001b5a3c0) Stream added, broadcasting: 5
I0508 10:56:33.398496       7 log.go:172] (0xc000994580) Reply frame received for 5
I0508 10:56:33.448695       7 log.go:172] (0xc000994580) Data frame received for 5
I0508 10:56:33.448727       7 log.go:172] (0xc001b5a3c0) (5) Data frame handling
I0508 10:56:33.448744       7 log.go:172] (0xc000994580) Data frame received for 3
I0508 10:56:33.448749       7 log.go:172] (0xc000b9a0a0) (3) Data frame handling
I0508 10:56:33.448765       7 log.go:172] (0xc000b9a0a0) (3) Data frame sent
I0508 10:56:33.448770       7 log.go:172] (0xc000994580) Data frame received for 3
I0508 10:56:33.448777       7 log.go:172] (0xc000b9a0a0) (3) Data frame handling
I0508 10:56:33.450641       7 log.go:172] (0xc000994580) Data frame received for 1
I0508 10:56:33.450676       7 log.go:172] (0xc000b346e0) (1) Data frame handling
I0508 10:56:33.450706       7 log.go:172] (0xc000b346e0) (1) Data frame sent
I0508 10:56:33.450734       7 log.go:172] (0xc000994580) (0xc000b346e0) Stream removed, broadcasting: 1
I0508 10:56:33.450816       7 log.go:172] (0xc000994580) Go away received
I0508 10:56:33.450867       7 log.go:172] (0xc000994580) (0xc000b346e0) Stream removed, broadcasting: 1
I0508 10:56:33.450885       7 log.go:172] (0xc000994580) (0xc000b9a0a0) Stream removed, broadcasting: 3
I0508 10:56:33.450896       7 log.go:172] (0xc000994580) (0xc001b5a3c0) Stream removed, broadcasting: 5
May  8 10:56:33.450: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 10:56:33.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-4972" for this suite.
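The exec calls above read both /etc/hosts and /etc/hosts-original from each container to decide whether the file is kubelet-managed. A minimal sketch of the distinction being verified, assuming the banner line the kubelet prepends to managed files (the helper name is ours, not the framework's):

```python
# Assumption: a kubelet-managed /etc/hosts begins with this banner line;
# a hostNetwork=true pod keeps the node's original, unmanaged file.
KUBELET_BANNER = "# Kubernetes-managed hosts file."

def is_kubelet_managed(etc_hosts: str) -> bool:
    """Return True if the file content carries the kubelet's banner."""
    return etc_hosts.lstrip().startswith(KUBELET_BANNER)

managed = "# Kubernetes-managed hosts file.\n127.0.0.1\tlocalhost\n"
original = "127.0.0.1\tlocalhost\n"
```

With hostNetwork=true, the test expects the `cat /etc/hosts` output above to look like `original`, not `managed`.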
• [SLOW TEST:11.220 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":31,"skipped":466,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 10:56:33.459: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May  8 10:56:34.080: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May  8 10:56:36.208: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724532194, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724532194, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724532194, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724532194, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May  8 10:56:39.241: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May  8 10:56:39.244: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-9158-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 10:56:40.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3585" for this suite.
STEP: Destroying namespace "webhook-3585-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:7.074 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":275,"completed":32,"skipped":478,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 10:56:40.533: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May  8 10:56:40.652: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e9ba2c6d-f089-4d8e-aa96-7ab79c402698" in namespace "projected-6177" to be "Succeeded or Failed"
May  8 10:56:40.668: INFO: Pod "downwardapi-volume-e9ba2c6d-f089-4d8e-aa96-7ab79c402698": Phase="Pending", Reason="", readiness=false. Elapsed: 15.865552ms
May  8 10:56:42.965: INFO: Pod "downwardapi-volume-e9ba2c6d-f089-4d8e-aa96-7ab79c402698": Phase="Pending", Reason="", readiness=false. Elapsed: 2.312163143s
May  8 10:56:45.036: INFO: Pod "downwardapi-volume-e9ba2c6d-f089-4d8e-aa96-7ab79c402698": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.383861187s
STEP: Saw pod success
May  8 10:56:45.036: INFO: Pod "downwardapi-volume-e9ba2c6d-f089-4d8e-aa96-7ab79c402698" satisfied condition "Succeeded or Failed"
May  8 10:56:45.040: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-e9ba2c6d-f089-4d8e-aa96-7ab79c402698 container client-container:
STEP: delete the pod
May  8 10:56:45.098: INFO: Waiting for pod downwardapi-volume-e9ba2c6d-f089-4d8e-aa96-7ab79c402698 to disappear
May  8 10:56:45.306: INFO: Pod downwardapi-volume-e9ba2c6d-f089-4d8e-aa96-7ab79c402698 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 10:56:45.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6177" for this suite.
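The pod-wait entries above all share one fixed shape (`Pod "<name>": Phase="<phase>", Reason="", readiness=<bool>. Elapsed: <duration>`). A small sketch of parsing that shape out of a log line; the regex and helper name are ours, not part of the e2e framework:

```python
import re

# One poll entry per line; capture the fields the framework prints.
POLL = re.compile(
    r'Pod "(?P<pod>[^"]+)": Phase="(?P<phase>[^"]+)", '
    r'Reason="(?P<reason>[^"]*)", readiness=(?P<ready>true|false)\. '
    r'Elapsed: (?P<elapsed>\S+)'
)

def parse_poll(line: str):
    """Extract pod name, phase, readiness and elapsed time, or None."""
    m = POLL.search(line)
    return m.groupdict() if m else None

entry = parse_poll(
    'May 8 10:56:42.965: INFO: Pod "downwardapi-volume-e9ba2c6d-f089-4d8e-aa96-7ab79c402698": '
    'Phase="Pending", Reason="", readiness=false. Elapsed: 2.312163143s'
)
```

This kind of parser is handy when grepping a long run for pods that sat in `Pending` unusually long.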
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":275,"completed":33,"skipped":499,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 10:56:45.316: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
May  8 10:56:45.529: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  8 10:56:45.599: INFO: Number of nodes with available pods: 0
May  8 10:56:45.599: INFO: Node kali-worker is running more than one daemon pod
May  8 10:56:46.608: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  8 10:56:46.613: INFO: Number of nodes with available pods: 0
May  8 10:56:46.613: INFO: Node kali-worker is running more than one daemon pod
May  8 10:56:47.607: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  8 10:56:47.609: INFO: Number of nodes with available pods: 0
May  8 10:56:47.609: INFO: Node kali-worker is running more than one daemon pod
May  8 10:56:48.684: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  8 10:56:48.687: INFO: Number of nodes with available pods: 0
May  8 10:56:48.687: INFO: Node kali-worker is running more than one daemon pod
May  8 10:56:49.605: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  8 10:56:49.608: INFO: Number of nodes with available pods: 2
May  8 10:56:49.608: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
May  8 10:56:49.698: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  8 10:56:49.703: INFO: Number of nodes with available pods: 1
May  8 10:56:49.703: INFO: Node kali-worker2 is running more than one daemon pod
May  8 10:56:50.708: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  8 10:56:50.712: INFO: Number of nodes with available pods: 1
May  8 10:56:50.712: INFO: Node kali-worker2 is running more than one daemon pod
May  8 10:56:51.882: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  8 10:56:51.886: INFO: Number of nodes with available pods: 1
May  8 10:56:51.886: INFO: Node kali-worker2 is running more than one daemon pod
May  8 10:56:52.708: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  8 10:56:52.712: INFO: Number of nodes with available pods: 1
May  8 10:56:52.712: INFO: Node kali-worker2 is running more than one daemon pod
May  8 10:56:53.708: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  8 10:56:53.711: INFO: Number of nodes with available pods: 1
May  8 10:56:53.711: INFO: Node kali-worker2 is running more than one daemon pod
May  8 10:56:54.820: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  8 10:56:54.831: INFO: Number of nodes with available pods: 1
May  8 10:56:54.831: INFO: Node kali-worker2 is running more than one daemon pod
May  8 10:56:55.708: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  8 10:56:55.712: INFO: Number of nodes with available pods: 1
May  8 10:56:55.712: INFO: Node kali-worker2 is running more than one daemon pod
May  8 10:56:56.708: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  8 10:56:56.711: INFO: Number of nodes with available pods: 1
May  8 10:56:56.711: INFO: Node kali-worker2 is running more than one daemon pod
May  8 10:56:57.709: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  8 10:56:57.715: INFO: Number of nodes with available pods: 1
May  8 10:56:57.715: INFO: Node kali-worker2 is running more than one daemon pod
May  8 10:56:58.708: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  8 10:56:58.712: INFO: Number of nodes with available pods: 1
May  8 10:56:58.712: INFO: Node kali-worker2 is running more than one daemon pod
May  8 10:56:59.708: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  8 10:56:59.712: INFO: Number of nodes with available pods: 1
May  8 10:56:59.712: INFO: Node kali-worker2 is running more than one daemon pod
May  8 10:57:00.708: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  8 10:57:00.713: INFO: Number of nodes with available pods: 1
May  8 10:57:00.713: INFO: Node kali-worker2 is running more than one daemon pod
May  8 10:57:01.708: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  8 10:57:01.712: INFO: Number of nodes with available pods: 1
May  8 10:57:01.712: INFO: Node kali-worker2 is running more than one daemon pod
May  8 10:57:02.725: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  8 10:57:02.728: INFO: Number of nodes with available pods: 1
May  8 10:57:02.728: INFO: Node kali-worker2 is running more than one daemon pod
May  8 10:57:03.708: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  8 10:57:03.714: INFO: Number of nodes with available pods: 1
May  8 10:57:03.714: INFO: Node kali-worker2 is running more than one daemon pod
May  8 10:57:04.880: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  8 10:57:04.884: INFO: Number of nodes with available pods: 1
May  8 10:57:04.884: INFO: Node kali-worker2 is running more than one daemon pod
May  8 10:57:05.745: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  8 10:57:05.911: INFO: Number of nodes with available pods: 1
May  8 10:57:05.911: INFO: Node kali-worker2 is running more than one daemon pod
May  8 10:57:06.708: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  8 10:57:06.711: INFO: Number of nodes with available pods: 1
May  8 10:57:06.711: INFO: Node kali-worker2 is running more than one daemon pod
May  8 10:57:07.708: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  8 10:57:07.712: INFO: Number of nodes with available pods: 2
May  8 10:57:07.712: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-348, will wait for the garbage collector to delete the pods
May  8 10:57:07.773: INFO: Deleting DaemonSet.extensions daemon-set took: 6.345218ms
May  8 10:57:08.074: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.238425ms
May  8 10:57:23.476: INFO: Number of nodes with available pods: 0
May  8 10:57:23.476: INFO: Number of running nodes: 0, number of available pods: 0
May  8 10:57:23.481: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-348/daemonsets","resourceVersion":"2562126"},"items":null}
May  8 10:57:23.484: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-348/pods","resourceVersion":"2562126"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 10:57:23.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-348" for this suite.
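The repeated "can't tolerate node kali-control-plane" entries above show the check loop: nodes whose `NoSchedule` taints the DaemonSet does not tolerate are skipped, and the test waits until every remaining node has an available daemon pod. A minimal sketch of that readiness predicate, with illustrative field names (not the framework's own types):

```python
# Keep only nodes whose every NoSchedule taint key is tolerated.
def schedulable_nodes(nodes, tolerated_keys=()):
    return [
        n for n in nodes
        if all(t["key"] in tolerated_keys
               for t in n.get("taints", [])
               if t["effect"] == "NoSchedule")
    ]

# Ready once each schedulable node reports at least one available pod.
def daemonset_ready(nodes, available_by_node, tolerated_keys=()):
    wanted = schedulable_nodes(nodes, tolerated_keys)
    return all(available_by_node.get(n["name"], 0) >= 1 for n in wanted)

cluster = [
    {"name": "kali-control-plane",
     "taints": [{"key": "node-role.kubernetes.io/master", "effect": "NoSchedule"}]},
    {"name": "kali-worker", "taints": []},
    {"name": "kali-worker2", "taints": []},
]
```

With an empty toleration list, only the two workers count, which matches the log's final "Number of running nodes: 2, number of available pods: 2".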
• [SLOW TEST:38.204 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":275,"completed":34,"skipped":594,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 10:57:23.520: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May  8 10:57:23.600: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-6f8f9587-7bd9-46e1-8f30-afb508435d49" in namespace "security-context-test-114" to be "Succeeded or Failed"
May  8 10:57:23.718: INFO: Pod "busybox-readonly-false-6f8f9587-7bd9-46e1-8f30-afb508435d49": Phase="Pending", Reason="", readiness=false. Elapsed: 117.073931ms
May  8 10:57:25.721: INFO: Pod "busybox-readonly-false-6f8f9587-7bd9-46e1-8f30-afb508435d49": Phase="Pending", Reason="", readiness=false. Elapsed: 2.120877882s
May  8 10:57:27.725: INFO: Pod "busybox-readonly-false-6f8f9587-7bd9-46e1-8f30-afb508435d49": Phase="Running", Reason="", readiness=true. Elapsed: 4.124555194s
May  8 10:57:29.729: INFO: Pod "busybox-readonly-false-6f8f9587-7bd9-46e1-8f30-afb508435d49": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.128085403s
May  8 10:57:29.729: INFO: Pod "busybox-readonly-false-6f8f9587-7bd9-46e1-8f30-afb508435d49" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 10:57:29.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-114" for this suite.
• [SLOW TEST:6.216 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  When creating a pod with readOnlyRootFilesystem
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:166
    should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":275,"completed":35,"skipped":611,"failed":0}
SS
------------------------------
[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 10:57:29.736: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ConfigMap
STEP: Ensuring resource quota status captures configMap creation
STEP: Deleting a ConfigMap
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 10:57:45.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-1800" for this suite.
• [SLOW TEST:16.182 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":275,"completed":36,"skipped":613,"failed":0}
SS
------------------------------
[sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 10:57:45.918: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
May  8 10:57:46.027: INFO: Waiting up to 5m0s for pod "downward-api-51381670-586c-4b07-ace1-267593877f59" in namespace "downward-api-7831" to be "Succeeded or Failed"
May  8 10:57:46.043: INFO: Pod "downward-api-51381670-586c-4b07-ace1-267593877f59": Phase="Pending", Reason="", readiness=false. Elapsed: 15.178531ms
May  8 10:57:48.186: INFO: Pod "downward-api-51381670-586c-4b07-ace1-267593877f59": Phase="Pending", Reason="", readiness=false. Elapsed: 2.158969534s
May  8 10:57:50.192: INFO: Pod "downward-api-51381670-586c-4b07-ace1-267593877f59": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.164808848s
STEP: Saw pod success
May  8 10:57:50.192: INFO: Pod "downward-api-51381670-586c-4b07-ace1-267593877f59" satisfied condition "Succeeded or Failed"
May  8 10:57:50.195: INFO: Trying to get logs from node kali-worker pod downward-api-51381670-586c-4b07-ace1-267593877f59 container dapi-container:
STEP: delete the pod
May  8 10:57:50.420: INFO: Waiting for pod downward-api-51381670-586c-4b07-ace1-267593877f59 to disappear
May  8 10:57:50.425: INFO: Pod downward-api-51381670-586c-4b07-ace1-267593877f59 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 10:57:50.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7831" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":275,"completed":37,"skipped":615,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 10:57:50.432: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: set up a multi version CRD
May  8 10:57:50.574: INFO: >>> kubeConfig: /root/.kube/config
STEP: rename a version
STEP: check the new version name is served
STEP: check the old version name is removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 10:58:06.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-8162" for this suite.
• [SLOW TEST:16.292 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":275,"completed":38,"skipped":681,"failed":0}
SSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 10:58:06.724: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should add annotations for pods in rc [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating Agnhost RC
May  8 10:58:06.771: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8712'
May  8 10:58:07.079: INFO: stderr: ""
May  8 10:58:07.079: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
May  8 10:58:08.091: INFO: Selector matched 1 pods for map[app:agnhost]
May  8 10:58:08.091: INFO: Found 0 / 1
May  8 10:58:09.282: INFO: Selector matched 1 pods for map[app:agnhost]
May  8 10:58:09.282: INFO: Found 0 / 1
May  8 10:58:10.083: INFO: Selector matched 1 pods for map[app:agnhost]
May  8 10:58:10.083: INFO: Found 0 / 1
May  8 10:58:11.083: INFO: Selector matched 1 pods for map[app:agnhost]
May  8 10:58:11.083: INFO: Found 1 / 1
May  8 10:58:11.083: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
May  8 10:58:11.086: INFO: Selector matched 1 pods for map[app:agnhost]
May  8 10:58:11.086: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
May  8 10:58:11.086: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config patch pod agnhost-master-mpdws --namespace=kubectl-8712 -p {"metadata":{"annotations":{"x":"y"}}}'
May  8 10:58:11.195: INFO: stderr: ""
May  8 10:58:11.195: INFO: stdout: "pod/agnhost-master-mpdws patched\n"
STEP: checking annotations
May  8 10:58:11.225: INFO: Selector matched 1 pods for map[app:agnhost]
May  8 10:58:11.226: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 10:58:11.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8712" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":275,"completed":39,"skipped":687,"failed":0} SSSSSS ------------------------------ [sig-network] Services should find a service from listing all namespaces [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 8 10:58:11.232: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should find a service from listing all namespaces [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: fetching services [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 8 10:58:11.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4379" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 •{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":275,"completed":40,"skipped":693,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 8 10:58:11.414: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-adeaf2a0-29e6-440a-a60e-a71628ad6c21 STEP: Creating a pod to test consume secrets May 8 10:58:11.673: INFO: Waiting up to 5m0s for pod "pod-secrets-6cd4aa1c-0f0b-4791-ae13-5b9035b85caa" in namespace "secrets-8811" to be "Succeeded or Failed" May 8 10:58:11.684: INFO: Pod "pod-secrets-6cd4aa1c-0f0b-4791-ae13-5b9035b85caa": Phase="Pending", Reason="", readiness=false. Elapsed: 10.850334ms May 8 10:58:13.731: INFO: Pod "pod-secrets-6cd4aa1c-0f0b-4791-ae13-5b9035b85caa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058447422s May 8 10:58:15.735: INFO: Pod "pod-secrets-6cd4aa1c-0f0b-4791-ae13-5b9035b85caa": Phase="Pending", Reason="", readiness=false. Elapsed: 4.062022193s May 8 10:58:17.876: INFO: Pod "pod-secrets-6cd4aa1c-0f0b-4791-ae13-5b9035b85caa": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.202749919s STEP: Saw pod success May 8 10:58:17.876: INFO: Pod "pod-secrets-6cd4aa1c-0f0b-4791-ae13-5b9035b85caa" satisfied condition "Succeeded or Failed" May 8 10:58:17.892: INFO: Trying to get logs from node kali-worker2 pod pod-secrets-6cd4aa1c-0f0b-4791-ae13-5b9035b85caa container secret-volume-test: STEP: delete the pod May 8 10:58:18.098: INFO: Waiting for pod pod-secrets-6cd4aa1c-0f0b-4791-ae13-5b9035b85caa to disappear May 8 10:58:18.101: INFO: Pod pod-secrets-6cd4aa1c-0f0b-4791-ae13-5b9035b85caa no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 8 10:58:18.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8811" for this suite. • [SLOW TEST:6.710 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":41,"skipped":701,"failed":0} SS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 8 10:58:18.124: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications 
on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification May 8 10:58:18.219: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7867 /api/v1/namespaces/watch-7867/configmaps/e2e-watch-test-configmap-a 67e53ee0-49f6-48b8-997a-70730551352f 2562452 0 2020-05-08 10:58:18 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-08 10:58:18 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 8 10:58:18.219: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7867 /api/v1/namespaces/watch-7867/configmaps/e2e-watch-test-configmap-a 67e53ee0-49f6-48b8-997a-70730551352f 2562452 0 2020-05-08 10:58:18 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-08 10:58:18 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A and ensuring the correct watchers observe the notification May 8 10:58:28.229: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7867 
/api/v1/namespaces/watch-7867/configmaps/e2e-watch-test-configmap-a 67e53ee0-49f6-48b8-997a-70730551352f 2562501 0 2020-05-08 10:58:18 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-08 10:58:28 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} May 8 10:58:28.229: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7867 /api/v1/namespaces/watch-7867/configmaps/e2e-watch-test-configmap-a 67e53ee0-49f6-48b8-997a-70730551352f 2562501 0 2020-05-08 10:58:18 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-08 10:58:28 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A again and ensuring the correct watchers observe the notification May 8 10:58:38.240: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7867 /api/v1/namespaces/watch-7867/configmaps/e2e-watch-test-configmap-a 67e53ee0-49f6-48b8-997a-70730551352f 2562531 0 2020-05-08 10:58:18 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-08 10:58:38 +0000 UTC FieldsV1 
FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 8 10:58:38.240: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7867 /api/v1/namespaces/watch-7867/configmaps/e2e-watch-test-configmap-a 67e53ee0-49f6-48b8-997a-70730551352f 2562531 0 2020-05-08 10:58:18 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-08 10:58:38 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap A and ensuring the correct watchers observe the notification May 8 10:58:48.248: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7867 /api/v1/namespaces/watch-7867/configmaps/e2e-watch-test-configmap-a 67e53ee0-49f6-48b8-997a-70730551352f 2562563 0 2020-05-08 10:58:18 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-08 10:58:38 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 
45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 8 10:58:48.249: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7867 /api/v1/namespaces/watch-7867/configmaps/e2e-watch-test-configmap-a 67e53ee0-49f6-48b8-997a-70730551352f 2562563 0 2020-05-08 10:58:18 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-08 10:58:38 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification May 8 10:58:58.257: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-7867 /api/v1/namespaces/watch-7867/configmaps/e2e-watch-test-configmap-b 8081f806-ca58-421d-850d-f424f272ba1c 2562593 0 2020-05-08 10:58:58 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-05-08 10:58:58 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 8 10:58:58.257: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-7867 /api/v1/namespaces/watch-7867/configmaps/e2e-watch-test-configmap-b 8081f806-ca58-421d-850d-f424f272ba1c 2562593 0 2020-05-08 
10:58:58 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-05-08 10:58:58 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap B and ensuring the correct watchers observe the notification May 8 10:59:08.264: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-7867 /api/v1/namespaces/watch-7867/configmaps/e2e-watch-test-configmap-b 8081f806-ca58-421d-850d-f424f272ba1c 2562623 0 2020-05-08 10:58:58 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-05-08 10:58:58 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 8 10:59:08.265: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-7867 /api/v1/namespaces/watch-7867/configmaps/e2e-watch-test-configmap-b 8081f806-ca58-421d-850d-f424f272ba1c 2562623 0 2020-05-08 10:58:58 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-05-08 10:58:58 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 8 10:59:18.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7867" for this suite. • [SLOW TEST:60.153 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":275,"completed":42,"skipped":703,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 8 10:59:18.278: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 May 8 10:59:18.399: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 8 10:59:21.310: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config 
--namespace=crd-publish-openapi-6982 create -f -' May 8 10:59:24.372: INFO: stderr: "" May 8 10:59:24.372: INFO: stdout: "e2e-test-crd-publish-openapi-4924-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" May 8 10:59:24.372: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6982 delete e2e-test-crd-publish-openapi-4924-crds test-cr' May 8 10:59:24.485: INFO: stderr: "" May 8 10:59:24.485: INFO: stdout: "e2e-test-crd-publish-openapi-4924-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" May 8 10:59:24.485: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6982 apply -f -' May 8 10:59:24.736: INFO: stderr: "" May 8 10:59:24.736: INFO: stdout: "e2e-test-crd-publish-openapi-4924-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" May 8 10:59:24.736: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6982 delete e2e-test-crd-publish-openapi-4924-crds test-cr' May 8 10:59:24.856: INFO: stderr: "" May 8 10:59:24.856: INFO: stdout: "e2e-test-crd-publish-openapi-4924-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR May 8 10:59:24.856: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4924-crds' May 8 10:59:25.098: INFO: stderr: "" May 8 10:59:25.098: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-4924-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 8 10:59:28.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6982" for this suite. • [SLOW TEST:9.760 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":275,"completed":43,"skipped":714,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 8 10:59:28.038: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0508 10:59:29.136742 7 
metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 8 10:59:29.136: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 8 10:59:29.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8256" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":275,"completed":44,"skipped":737,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 8 10:59:29.144: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:271 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a replication controller May 8 10:59:29.332: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-363' May 8 10:59:30.252: INFO: stderr: "" May 8 10:59:30.252: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
May 8 10:59:30.252: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-363' May 8 10:59:30.448: INFO: stderr: "" May 8 10:59:30.448: INFO: stdout: "update-demo-nautilus-ws5h2 " STEP: Replicas for name=update-demo: expected=2 actual=1 May 8 10:59:35.448: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-363' May 8 10:59:35.555: INFO: stderr: "" May 8 10:59:35.555: INFO: stdout: "update-demo-nautilus-sskw7 update-demo-nautilus-ws5h2 " May 8 10:59:35.555: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sskw7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-363' May 8 10:59:35.645: INFO: stderr: "" May 8 10:59:35.645: INFO: stdout: "true" May 8 10:59:35.645: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sskw7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-363' May 8 10:59:35.740: INFO: stderr: "" May 8 10:59:35.740: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 8 10:59:35.740: INFO: validating pod update-demo-nautilus-sskw7 May 8 10:59:35.744: INFO: got data: { "image": "nautilus.jpg" } May 8 10:59:35.744: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
May 8 10:59:35.744: INFO: update-demo-nautilus-sskw7 is verified up and running May 8 10:59:35.744: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ws5h2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-363' May 8 10:59:35.831: INFO: stderr: "" May 8 10:59:35.831: INFO: stdout: "true" May 8 10:59:35.832: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ws5h2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-363' May 8 10:59:35.926: INFO: stderr: "" May 8 10:59:35.927: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 8 10:59:35.927: INFO: validating pod update-demo-nautilus-ws5h2 May 8 10:59:35.930: INFO: got data: { "image": "nautilus.jpg" } May 8 10:59:35.931: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 8 10:59:35.931: INFO: update-demo-nautilus-ws5h2 is verified up and running STEP: using delete to clean up resources May 8 10:59:35.931: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-363' May 8 10:59:36.032: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 8 10:59:36.032: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 8 10:59:36.032: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-363' May 8 10:59:36.125: INFO: stderr: "No resources found in kubectl-363 namespace.\n" May 8 10:59:36.125: INFO: stdout: "" May 8 10:59:36.125: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-363 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 8 10:59:36.236: INFO: stderr: "" May 8 10:59:36.236: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 8 10:59:36.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-363" for this suite. 
• [SLOW TEST:7.100 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:269 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":275,"completed":45,"skipped":769,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 8 10:59:36.245: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 8 10:59:46.471: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 8 10:59:46.491: INFO: Pod pod-with-poststart-exec-hook still exists May 8 10:59:48.492: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 8 10:59:48.496: INFO: Pod pod-with-poststart-exec-hook still exists May 8 10:59:50.492: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 8 10:59:50.497: INFO: Pod pod-with-poststart-exec-hook still exists May 8 10:59:52.492: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 8 10:59:52.496: INFO: Pod pod-with-poststart-exec-hook still exists May 8 10:59:54.492: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 8 10:59:54.496: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 8 10:59:54.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-5180" for this suite. 
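The "Waiting for pod ... to disappear" lines above come from a simple poll-until-gone loop with a fixed interval (2s in this run) and an overall deadline. A generic sketch of that loop, with the clock and sleep injectable so it can be exercised without real waiting (`wait_for_disappear` is an illustrative name, not the framework's):

```python
import time

def wait_for_disappear(exists, timeout_s=30.0, interval_s=2.0,
                       clock=time.monotonic, sleep=time.sleep):
    """Poll exists() until it returns False or the deadline passes.

    Mirrors the e2e framework's wait-for-pod-to-disappear pattern seen in
    the log above. Returns True if the resource vanished in time.
    """
    deadline = clock() + timeout_s
    while clock() < deadline:
        if not exists():
            return True
        sleep(interval_s)
    return not exists()  # one final check at the deadline

# Usage: a fake resource that vanishes on the fourth check.
checks = iter([True, True, True, False])
print(wait_for_disappear(lambda: next(checks), timeout_s=1.0, interval_s=0.0))  # True
```

The final check after the deadline avoids a false negative when the resource disappears during the last sleep.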
• [SLOW TEST:18.261 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":275,"completed":46,"skipped":785,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 8 10:59:54.507: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 8 10:59:58.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-2512" for this suite. 
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":47,"skipped":836,"failed":0} SSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 8 10:59:58.636: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
May 8 10:59:58.779: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 10:59:58.794: INFO: Number of nodes with available pods: 0 May 8 10:59:58.794: INFO: Node kali-worker is running more than one daemon pod May 8 10:59:59.823: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 10:59:59.826: INFO: Number of nodes with available pods: 0 May 8 10:59:59.826: INFO: Node kali-worker is running more than one daemon pod May 8 11:00:00.805: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 11:00:00.808: INFO: Number of nodes with available pods: 0 May 8 11:00:00.808: INFO: Node kali-worker is running more than one daemon pod May 8 11:00:01.860: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 11:00:01.863: INFO: Number of nodes with available pods: 0 May 8 11:00:01.863: INFO: Node kali-worker is running more than one daemon pod May 8 11:00:02.799: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 11:00:02.801: INFO: Number of nodes with available pods: 1 May 8 11:00:02.801: INFO: Node kali-worker is running more than one daemon pod May 8 11:00:03.798: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 11:00:03.800: INFO: Number of nodes with available pods: 2 May 8 11:00:03.800: INFO: Number of running nodes: 2, number 
of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. May 8 11:00:03.843: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 11:00:03.917: INFO: Number of nodes with available pods: 1 May 8 11:00:03.917: INFO: Node kali-worker2 is running more than one daemon pod May 8 11:00:04.920: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 11:00:04.922: INFO: Number of nodes with available pods: 1 May 8 11:00:04.922: INFO: Node kali-worker2 is running more than one daemon pod May 8 11:00:06.179: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 11:00:06.278: INFO: Number of nodes with available pods: 1 May 8 11:00:06.278: INFO: Node kali-worker2 is running more than one daemon pod May 8 11:00:06.922: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 11:00:06.926: INFO: Number of nodes with available pods: 1 May 8 11:00:06.926: INFO: Node kali-worker2 is running more than one daemon pod May 8 11:00:07.922: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 8 11:00:07.925: INFO: Number of nodes with available pods: 2 May 8 11:00:07.925: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. 
[AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3556, will wait for the garbage collector to delete the pods May 8 11:00:07.989: INFO: Deleting DaemonSet.extensions daemon-set took: 6.97713ms May 8 11:00:08.290: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.361036ms May 8 11:00:13.794: INFO: Number of nodes with available pods: 0 May 8 11:00:13.794: INFO: Number of running nodes: 0, number of available pods: 0 May 8 11:00:13.797: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3556/daemonsets","resourceVersion":"2563054"},"items":null} May 8 11:00:13.799: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3556/pods","resourceVersion":"2563054"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 8 11:00:13.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-3556" for this suite. 
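The repeated "DaemonSet pods can't tolerate node kali-control-plane with taints ... skip checking this node" lines above reflect the test's readiness loop: a node only counts toward the expected pod total if the DaemonSet's pods tolerate all of its NoSchedule taints. A simplified sketch of that node-filtering step (taint/toleration matching here is deliberately reduced to key+effect; the real Kubernetes rules also handle operators, values, and empty keys):

```python
# Sketch of the node-coverage check the DaemonSet test loops on above.
# Node and taint shapes follow the Kubernetes API; the data is hypothetical.

def tolerates(taint: dict, tolerations: list) -> bool:
    # Simplified: match on key, and on effect (empty effect matches all).
    return any(t.get("key") == taint["key"] and
               t.get("effect") in (taint["effect"], None, "")
               for t in tolerations)

def schedulable_nodes(nodes: list, tolerations: list) -> list:
    out = []
    for node in nodes:
        taints = [t for t in node.get("taints", []) if t["effect"] == "NoSchedule"]
        if all(tolerates(t, tolerations) for t in taints):
            out.append(node["name"])
    return out

nodes = [
    {"name": "kali-control-plane",
     "taints": [{"key": "node-role.kubernetes.io/master", "effect": "NoSchedule"}]},
    {"name": "kali-worker", "taints": []},
    {"name": "kali-worker2", "taints": []},
]
# With no tolerations, the control-plane node is skipped, leaving 2 nodes,
# which is why the test waits for "Number of running nodes: 2".
print(schedulable_nodes(nodes, tolerations=[]))
```

Adding a matching toleration would bring the control-plane node back into the expected count, which is exactly how system DaemonSets like kube-proxy cover master nodes.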
• [SLOW TEST:15.186 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":275,"completed":48,"skipped":844,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 8 11:00:13.822: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with configMap that has name projected-configmap-test-upd-c7b3b44f-5787-4acd-95b5-2e587a905fc3 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-c7b3b44f-5787-4acd-95b5-2e587a905fc3 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 8 11:00:22.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8087" for this suite. 
• [SLOW TEST:8.201 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":49,"skipped":860,"failed":0} SSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 8 11:00:22.023: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-6899 [It] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating statefulset ss in namespace statefulset-6899 May 8 11:00:22.427: INFO: Found 0 stateful pods, waiting for 1 May 8 11:00:32.430: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: 
verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 May 8 11:00:32.506: INFO: Deleting all statefulset in ns statefulset-6899 May 8 11:00:32.549: INFO: Scaling statefulset ss to 0 May 8 11:01:02.635: INFO: Waiting for statefulset status.replicas updated to 0 May 8 11:01:02.639: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 8 11:01:02.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6899" for this suite. • [SLOW TEST:40.643 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":275,"completed":50,"skipped":870,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 8 11:01:02.667: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename 
var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test substitution in container's command May 8 11:01:02.807: INFO: Waiting up to 5m0s for pod "var-expansion-9a55d624-f5dd-458a-9c0c-872d0e12d63e" in namespace "var-expansion-2345" to be "Succeeded or Failed" May 8 11:01:02.871: INFO: Pod "var-expansion-9a55d624-f5dd-458a-9c0c-872d0e12d63e": Phase="Pending", Reason="", readiness=false. Elapsed: 63.718996ms May 8 11:01:04.875: INFO: Pod "var-expansion-9a55d624-f5dd-458a-9c0c-872d0e12d63e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067636314s May 8 11:01:06.880: INFO: Pod "var-expansion-9a55d624-f5dd-458a-9c0c-872d0e12d63e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.072310249s STEP: Saw pod success May 8 11:01:06.880: INFO: Pod "var-expansion-9a55d624-f5dd-458a-9c0c-872d0e12d63e" satisfied condition "Succeeded or Failed" May 8 11:01:06.883: INFO: Trying to get logs from node kali-worker2 pod var-expansion-9a55d624-f5dd-458a-9c0c-872d0e12d63e container dapi-container: STEP: delete the pod May 8 11:01:06.923: INFO: Waiting for pod var-expansion-9a55d624-f5dd-458a-9c0c-872d0e12d63e to disappear May 8 11:01:07.118: INFO: Pod var-expansion-9a55d624-f5dd-458a-9c0c-872d0e12d63e no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 8 11:01:07.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-2345" for this suite. 
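The Variable Expansion test above runs a pod whose command contains `$(VAR)` references and checks the substituted output in the container logs. A rough sketch of the substitution rule being exercised, under simplifying assumptions (the real kubelet syntax also supports `$$` escaping, which this sketch omits; like Kubernetes, it leaves unresolvable references intact):

```python
import re

# Simplified sketch of Kubernetes-style $(VAR) expansion in a container
# command. Unknown variables are left as-is rather than replaced or erased.

def expand(command: str, env: dict) -> str:
    return re.sub(r"\$\(([A-Za-z_][A-Za-z0-9_]*)\)",
                  lambda m: env.get(m.group(1), m.group(0)),
                  command)

# POD_NAME resolves; NODE_NAME is not defined, so the reference survives.
print(expand("echo $(POD_NAME) on $(NODE_NAME)",
             {"POD_NAME": "var-expansion-test"}))
```

Leaving unknown references untouched matters in practice: it keeps literal `$(...)` text in commands from silently disappearing when no matching env var is defined.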
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":275,"completed":51,"skipped":887,"failed":0} SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 8 11:01:07.175: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0777 on tmpfs May 8 11:01:07.318: INFO: Waiting up to 5m0s for pod "pod-b41d57f8-76eb-4dea-b7a1-64b3690bdf10" in namespace "emptydir-591" to be "Succeeded or Failed" May 8 11:01:07.329: INFO: Pod "pod-b41d57f8-76eb-4dea-b7a1-64b3690bdf10": Phase="Pending", Reason="", readiness=false. Elapsed: 10.488816ms May 8 11:01:09.347: INFO: Pod "pod-b41d57f8-76eb-4dea-b7a1-64b3690bdf10": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028343676s May 8 11:01:11.356: INFO: Pod "pod-b41d57f8-76eb-4dea-b7a1-64b3690bdf10": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.037204846s STEP: Saw pod success May 8 11:01:11.356: INFO: Pod "pod-b41d57f8-76eb-4dea-b7a1-64b3690bdf10" satisfied condition "Succeeded or Failed" May 8 11:01:11.358: INFO: Trying to get logs from node kali-worker pod pod-b41d57f8-76eb-4dea-b7a1-64b3690bdf10 container test-container: STEP: delete the pod May 8 11:01:11.399: INFO: Waiting for pod pod-b41d57f8-76eb-4dea-b7a1-64b3690bdf10 to disappear May 8 11:01:11.406: INFO: Pod pod-b41d57f8-76eb-4dea-b7a1-64b3690bdf10 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 8 11:01:11.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-591" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":52,"skipped":892,"failed":0} SSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 8 11:01:11.412: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should be possible to delete [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 8 11:01:11.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-3362" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":275,"completed":53,"skipped":899,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 8 11:01:11.614: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a service externalname-service with the type=ExternalName in namespace services-8679 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-8679 I0508 11:01:11.761352 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-8679, replica count: 2 I0508 11:01:14.811867 7 runners.go:190] externalname-service Pods: 2 out of 2 
created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0508 11:01:17.812140 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 8 11:01:17.812: INFO: Creating new exec pod May 8 11:01:22.835: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-8679 execpodlr2qk -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' May 8 11:01:23.054: INFO: stderr: "I0508 11:01:22.966399 475 log.go:172] (0xc000b24000) (0xc000408000) Create stream\nI0508 11:01:22.966477 475 log.go:172] (0xc000b24000) (0xc000408000) Stream added, broadcasting: 1\nI0508 11:01:22.968990 475 log.go:172] (0xc000b24000) Reply frame received for 1\nI0508 11:01:22.969027 475 log.go:172] (0xc000b24000) (0xc000648000) Create stream\nI0508 11:01:22.969040 475 log.go:172] (0xc000b24000) (0xc000648000) Stream added, broadcasting: 3\nI0508 11:01:22.970092 475 log.go:172] (0xc000b24000) Reply frame received for 3\nI0508 11:01:22.970129 475 log.go:172] (0xc000b24000) (0xc00064a000) Create stream\nI0508 11:01:22.970143 475 log.go:172] (0xc000b24000) (0xc00064a000) Stream added, broadcasting: 5\nI0508 11:01:22.970881 475 log.go:172] (0xc000b24000) Reply frame received for 5\nI0508 11:01:23.046893 475 log.go:172] (0xc000b24000) Data frame received for 3\nI0508 11:01:23.046979 475 log.go:172] (0xc000b24000) Data frame received for 5\nI0508 11:01:23.047031 475 log.go:172] (0xc00064a000) (5) Data frame handling\nI0508 11:01:23.047055 475 log.go:172] (0xc00064a000) (5) Data frame sent\nI0508 11:01:23.047070 475 log.go:172] (0xc000b24000) Data frame received for 5\nI0508 11:01:23.047085 475 log.go:172] (0xc00064a000) (5) Data frame handling\n+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0508 11:01:23.047123 475 log.go:172] 
(0xc000648000) (3) Data frame handling\nI0508 11:01:23.048983 475 log.go:172] (0xc000b24000) Data frame received for 1\nI0508 11:01:23.049000 475 log.go:172] (0xc000408000) (1) Data frame handling\nI0508 11:01:23.049014 475 log.go:172] (0xc000408000) (1) Data frame sent\nI0508 11:01:23.049024 475 log.go:172] (0xc000b24000) (0xc000408000) Stream removed, broadcasting: 1\nI0508 11:01:23.049324 475 log.go:172] (0xc000b24000) Go away received\nI0508 11:01:23.049406 475 log.go:172] (0xc000b24000) (0xc000408000) Stream removed, broadcasting: 1\nI0508 11:01:23.049431 475 log.go:172] (0xc000b24000) (0xc000648000) Stream removed, broadcasting: 3\nI0508 11:01:23.049439 475 log.go:172] (0xc000b24000) (0xc00064a000) Stream removed, broadcasting: 5\n" May 8 11:01:23.054: INFO: stdout: "" May 8 11:01:23.055: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-8679 execpodlr2qk -- /bin/sh -x -c nc -zv -t -w 2 10.98.241.52 80' May 8 11:01:23.245: INFO: stderr: "I0508 11:01:23.178083 495 log.go:172] (0xc00003a790) (0xc0008014a0) Create stream\nI0508 11:01:23.178144 495 log.go:172] (0xc00003a790) (0xc0008014a0) Stream added, broadcasting: 1\nI0508 11:01:23.181067 495 log.go:172] (0xc00003a790) Reply frame received for 1\nI0508 11:01:23.181135 495 log.go:172] (0xc00003a790) (0xc000686000) Create stream\nI0508 11:01:23.181162 495 log.go:172] (0xc00003a790) (0xc000686000) Stream added, broadcasting: 3\nI0508 11:01:23.182107 495 log.go:172] (0xc00003a790) Reply frame received for 3\nI0508 11:01:23.182146 495 log.go:172] (0xc00003a790) (0xc000801540) Create stream\nI0508 11:01:23.182162 495 log.go:172] (0xc00003a790) (0xc000801540) Stream added, broadcasting: 5\nI0508 11:01:23.183165 495 log.go:172] (0xc00003a790) Reply frame received for 5\nI0508 11:01:23.239276 495 log.go:172] (0xc00003a790) Data frame received for 3\nI0508 11:01:23.239338 495 log.go:172] (0xc00003a790) Data frame received for 5\nI0508 
11:01:23.239381 495 log.go:172] (0xc000801540) (5) Data frame handling\nI0508 11:01:23.239400 495 log.go:172] (0xc000801540) (5) Data frame sent\nI0508 11:01:23.239412 495 log.go:172] (0xc00003a790) Data frame received for 5\nI0508 11:01:23.239434 495 log.go:172] (0xc000801540) (5) Data frame handling\n+ nc -zv -t -w 2 10.98.241.52 80\nConnection to 10.98.241.52 80 port [tcp/http] succeeded!\nI0508 11:01:23.239466 495 log.go:172] (0xc000686000) (3) Data frame handling\nI0508 11:01:23.240857 495 log.go:172] (0xc00003a790) Data frame received for 1\nI0508 11:01:23.240886 495 log.go:172] (0xc0008014a0) (1) Data frame handling\nI0508 11:01:23.240907 495 log.go:172] (0xc0008014a0) (1) Data frame sent\nI0508 11:01:23.240923 495 log.go:172] (0xc00003a790) (0xc0008014a0) Stream removed, broadcasting: 1\nI0508 11:01:23.241322 495 log.go:172] (0xc00003a790) Go away received\nI0508 11:01:23.241516 495 log.go:172] (0xc00003a790) (0xc0008014a0) Stream removed, broadcasting: 1\nI0508 11:01:23.241537 495 log.go:172] (0xc00003a790) (0xc000686000) Stream removed, broadcasting: 3\nI0508 11:01:23.241549 495 log.go:172] (0xc00003a790) (0xc000801540) Stream removed, broadcasting: 5\n" May 8 11:01:23.245: INFO: stdout: "" May 8 11:01:23.245: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-8679 execpodlr2qk -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.15 32620' May 8 11:01:23.452: INFO: stderr: "I0508 11:01:23.371027 516 log.go:172] (0xc000abe000) (0xc0009a6000) Create stream\nI0508 11:01:23.371079 516 log.go:172] (0xc000abe000) (0xc0009a6000) Stream added, broadcasting: 1\nI0508 11:01:23.376205 516 log.go:172] (0xc000abe000) Reply frame received for 1\nI0508 11:01:23.376306 516 log.go:172] (0xc000abe000) (0xc0003baaa0) Create stream\nI0508 11:01:23.376331 516 log.go:172] (0xc000abe000) (0xc0003baaa0) Stream added, broadcasting: 3\nI0508 11:01:23.383125 516 log.go:172] (0xc000abe000) Reply frame received for 
3\nI0508 11:01:23.383160 516 log.go:172] (0xc000abe000) (0xc000a40000) Create stream\nI0508 11:01:23.383167 516 log.go:172] (0xc000abe000) (0xc000a40000) Stream added, broadcasting: 5\nI0508 11:01:23.384054 516 log.go:172] (0xc000abe000) Reply frame received for 5\nI0508 11:01:23.445015 516 log.go:172] (0xc000abe000) Data frame received for 3\nI0508 11:01:23.445096 516 log.go:172] (0xc0003baaa0) (3) Data frame handling\nI0508 11:01:23.445328 516 log.go:172] (0xc000abe000) Data frame received for 5\nI0508 11:01:23.445356 516 log.go:172] (0xc000a40000) (5) Data frame handling\nI0508 11:01:23.445388 516 log.go:172] (0xc000a40000) (5) Data frame sent\nI0508 11:01:23.445403 516 log.go:172] (0xc000abe000) Data frame received for 5\nI0508 11:01:23.445413 516 log.go:172] (0xc000a40000) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.15 32620\nConnection to 172.17.0.15 32620 port [tcp/32620] succeeded!\nI0508 11:01:23.446906 516 log.go:172] (0xc000abe000) Data frame received for 1\nI0508 11:01:23.446936 516 log.go:172] (0xc0009a6000) (1) Data frame handling\nI0508 11:01:23.446968 516 log.go:172] (0xc0009a6000) (1) Data frame sent\nI0508 11:01:23.447153 516 log.go:172] (0xc000abe000) (0xc0009a6000) Stream removed, broadcasting: 1\nI0508 11:01:23.447284 516 log.go:172] (0xc000abe000) Go away received\nI0508 11:01:23.447720 516 log.go:172] (0xc000abe000) (0xc0009a6000) Stream removed, broadcasting: 1\nI0508 11:01:23.447748 516 log.go:172] (0xc000abe000) (0xc0003baaa0) Stream removed, broadcasting: 3\nI0508 11:01:23.447767 516 log.go:172] (0xc000abe000) (0xc000a40000) Stream removed, broadcasting: 5\n" May 8 11:01:23.452: INFO: stdout: "" May 8 11:01:23.452: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-8679 execpodlr2qk -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.18 32620' May 8 11:01:23.635: INFO: stderr: "I0508 11:01:23.571997 538 log.go:172] (0xc000810000) (0xc000805360) Create 
stream\nI0508 11:01:23.572061 538 log.go:172] (0xc000810000) (0xc000805360) Stream added, broadcasting: 1\nI0508 11:01:23.574982 538 log.go:172] (0xc000810000) Reply frame received for 1\nI0508 11:01:23.575038 538 log.go:172] (0xc000810000) (0xc000918000) Create stream\nI0508 11:01:23.575055 538 log.go:172] (0xc000810000) (0xc000918000) Stream added, broadcasting: 3\nI0508 11:01:23.576163 538 log.go:172] (0xc000810000) Reply frame received for 3\nI0508 11:01:23.576198 538 log.go:172] (0xc000810000) (0xc000805400) Create stream\nI0508 11:01:23.576216 538 log.go:172] (0xc000810000) (0xc000805400) Stream added, broadcasting: 5\nI0508 11:01:23.577342 538 log.go:172] (0xc000810000) Reply frame received for 5\nI0508 11:01:23.628873 538 log.go:172] (0xc000810000) Data frame received for 3\nI0508 11:01:23.628913 538 log.go:172] (0xc000918000) (3) Data frame handling\nI0508 11:01:23.628937 538 log.go:172] (0xc000810000) Data frame received for 5\nI0508 11:01:23.628947 538 log.go:172] (0xc000805400) (5) Data frame handling\nI0508 11:01:23.628958 538 log.go:172] (0xc000805400) (5) Data frame sent\nI0508 11:01:23.628968 538 log.go:172] (0xc000810000) Data frame received for 5\nI0508 11:01:23.628977 538 log.go:172] (0xc000805400) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.18 32620\nConnection to 172.17.0.18 32620 port [tcp/32620] succeeded!\nI0508 11:01:23.630786 538 log.go:172] (0xc000810000) Data frame received for 1\nI0508 11:01:23.630823 538 log.go:172] (0xc000805360) (1) Data frame handling\nI0508 11:01:23.630849 538 log.go:172] (0xc000805360) (1) Data frame sent\nI0508 11:01:23.630865 538 log.go:172] (0xc000810000) (0xc000805360) Stream removed, broadcasting: 1\nI0508 11:01:23.630880 538 log.go:172] (0xc000810000) Go away received\nI0508 11:01:23.631250 538 log.go:172] (0xc000810000) (0xc000805360) Stream removed, broadcasting: 1\nI0508 11:01:23.631272 538 log.go:172] (0xc000810000) (0xc000918000) Stream removed, broadcasting: 3\nI0508 11:01:23.631283 538 
log.go:172] (0xc000810000) (0xc000805400) Stream removed, broadcasting: 5\n" May 8 11:01:23.636: INFO: stdout: "" May 8 11:01:23.636: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 8 11:01:23.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8679" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:12.108 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":275,"completed":54,"skipped":950,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 8 11:01:23.722: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 May 8 11:01:23.917: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", 
Kind:"Pod", Name:"pod3", UID:"76ae5553-8dd3-46bf-811e-a13083a78b97", Controller:(*bool)(0xc0047f606a), BlockOwnerDeletion:(*bool)(0xc0047f606b)}} May 8 11:01:23.940: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"479c0d18-0a66-4b0d-9496-f055a7057a76", Controller:(*bool)(0xc0047f634a), BlockOwnerDeletion:(*bool)(0xc0047f634b)}} May 8 11:01:23.982: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"331b4511-52fc-4894-8009-a3c4b3d2875b", Controller:(*bool)(0xc0047f667a), BlockOwnerDeletion:(*bool)(0xc0047f667b)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 8 11:01:29.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6075" for this suite. • [SLOW TEST:5.333 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":275,"completed":55,"skipped":958,"failed":0} SSS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 8 11:01:29.055: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in 
namespace [It] lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 8 11:01:29.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-4514" for this suite. •{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":275,"completed":56,"skipped":961,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 8 11:01:29.768: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod busybox-b817db87-2660-4141-b07b-5149f3f4a7d0 in namespace container-probe-6193 May 8 11:01:33.958: INFO: Started pod busybox-b817db87-2660-4141-b07b-5149f3f4a7d0 in namespace container-probe-6193 STEP: checking the pod's current state and verifying that restartCount is present May 8 11:01:33.961: INFO: Initial restart count of pod busybox-b817db87-2660-4141-b07b-5149f3f4a7d0 is 0 STEP: deleting the pod [AfterEach] [k8s.io] 
Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 8 11:05:34.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6193" for this suite. • [SLOW TEST:245.236 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":275,"completed":57,"skipped":980,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 8 11:05:35.004: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-map-fe886ddf-47b4-49b5-9486-65f354e189f3 STEP: Creating a pod to test consume configMaps May 8 11:05:35.192: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-584a899a-cb28-44c1-b547-9961e62f96ce" in namespace "projected-2459" 
to be "Succeeded or Failed" May 8 11:05:35.211: INFO: Pod "pod-projected-configmaps-584a899a-cb28-44c1-b547-9961e62f96ce": Phase="Pending", Reason="", readiness=false. Elapsed: 19.119266ms May 8 11:05:37.238: INFO: Pod "pod-projected-configmaps-584a899a-cb28-44c1-b547-9961e62f96ce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045707818s May 8 11:05:39.255: INFO: Pod "pod-projected-configmaps-584a899a-cb28-44c1-b547-9961e62f96ce": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.062601726s STEP: Saw pod success May 8 11:05:39.255: INFO: Pod "pod-projected-configmaps-584a899a-cb28-44c1-b547-9961e62f96ce" satisfied condition "Succeeded or Failed" May 8 11:05:39.258: INFO: Trying to get logs from node kali-worker2 pod pod-projected-configmaps-584a899a-cb28-44c1-b547-9961e62f96ce container projected-configmap-volume-test: STEP: delete the pod May 8 11:05:39.288: INFO: Waiting for pod pod-projected-configmaps-584a899a-cb28-44c1-b547-9961e62f96ce to disappear May 8 11:05:39.293: INFO: Pod pod-projected-configmaps-584a899a-cb28-44c1-b547-9961e62f96ce no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 8 11:05:39.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2459" for this suite. 
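The "Succeeded or Failed" wait that recurs throughout this log (poll the pod phase every ~2s until it reaches a terminal state or the 5m timeout expires) can be sketched as follows. This is a simplified model, not the framework's actual helper; `get_phase` is a hypothetical stand-in for a real Kubernetes client call.

```python
import time

def wait_for_terminal_phase(get_phase, timeout_s=300, poll_s=2.0):
    """Poll a pod's phase until it is terminal, mirroring the e2e
    framework's "Succeeded or Failed" wait seen in the log above.

    get_phase is a caller-supplied callable (a stand-in for a real
    Kubernetes API call) returning the current phase string.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        time.sleep(poll_s)
    raise TimeoutError("pod did not reach a terminal phase in time")

# Simulated sequence matching the log: two Pending polls, then Succeeded.
phases = iter(["Pending", "Pending", "Succeeded"])
result = wait_for_terminal_phase(lambda: next(phases),
                                 timeout_s=10, poll_s=0.01)
print(result)  # Succeeded
```

The real framework also distinguishes "satisfied condition" from per-poll progress lines, but the control flow is the same poll-until-terminal loop.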
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":275,"completed":58,"skipped":987,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 8 11:05:39.306: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 May 8 11:05:39.804: INFO: Waiting up to 5m0s for pod "busybox-user-65534-da764b7e-c4cb-4600-8723-71eb5db11301" in namespace "security-context-test-4234" to be "Succeeded or Failed" May 8 11:05:39.886: INFO: Pod "busybox-user-65534-da764b7e-c4cb-4600-8723-71eb5db11301": Phase="Pending", Reason="", readiness=false. Elapsed: 82.37832ms May 8 11:05:41.891: INFO: Pod "busybox-user-65534-da764b7e-c4cb-4600-8723-71eb5db11301": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086608727s May 8 11:05:43.895: INFO: Pod "busybox-user-65534-da764b7e-c4cb-4600-8723-71eb5db11301": Phase="Pending", Reason="", readiness=false. Elapsed: 4.090652907s May 8 11:05:45.898: INFO: Pod "busybox-user-65534-da764b7e-c4cb-4600-8723-71eb5db11301": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.094598024s May 8 11:05:45.899: INFO: Pod "busybox-user-65534-da764b7e-c4cb-4600-8723-71eb5db11301" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 8 11:05:45.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-4234" for this suite. • [SLOW TEST:6.601 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 When creating a container with runAsUser /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:45 should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":59,"skipped":997,"failed":0} S ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 8 11:05:45.908: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 8 11:05:46.538: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 8 11:05:48.568: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724532746, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724532746, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724532746, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724532746, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} May 8 11:05:50.572: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724532746, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724532746, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724532746, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724532746, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 8 11:05:53.599: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 8 11:05:53.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1880" for this suite. STEP: Destroying namespace "webhook-1880-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.272 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":275,"completed":60,"skipped":998,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 8 11:05:54.180: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 8 11:05:55.666: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 8 11:05:58.215: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, 
UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724532755, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724532755, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724532755, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724532755, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} May 8 11:06:00.219: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724532755, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724532755, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724532755, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724532755, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 8 11:06:03.339: INFO: Waiting for amount of 
service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 8 11:06:03.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7032" for this suite. STEP: Destroying namespace "webhook-7032-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.786 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":275,"completed":61,"skipped":1013,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 8 11:06:03.966: 
INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1288 STEP: creating a pod May 8 11:06:04.066: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config run logs-generator --image=us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 --namespace=kubectl-4870 -- logs-generator --log-lines-total 100 --run-duration 20s' May 8 11:06:04.323: INFO: stderr: "" May 8 11:06:04.323: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Waiting for log generator to start. May 8 11:06:04.323: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] May 8 11:06:04.323: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-4870" to be "running and ready, or succeeded" May 8 11:06:04.344: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 20.293231ms May 8 11:06:06.387: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063764831s May 8 11:06:08.391: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.067713616s May 8 11:06:08.391: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" May 8 11:06:08.391: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. 
Pods: [logs-generator] STEP: checking for matching strings May 8 11:06:08.391: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4870' May 8 11:06:08.520: INFO: stderr: "" May 8 11:06:08.520: INFO: stdout: "I0508 11:06:07.008598 1 logs_generator.go:76] 0 POST /api/v1/namespaces/default/pods/xkx 570\nI0508 11:06:07.209638 1 logs_generator.go:76] 1 POST /api/v1/namespaces/default/pods/h6g 371\nI0508 11:06:07.408783 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/kube-system/pods/ck4 339\nI0508 11:06:07.608761 1 logs_generator.go:76] 3 GET /api/v1/namespaces/kube-system/pods/hbp 411\nI0508 11:06:07.808827 1 logs_generator.go:76] 4 GET /api/v1/namespaces/ns/pods/8nc 575\nI0508 11:06:08.008708 1 logs_generator.go:76] 5 GET /api/v1/namespaces/kube-system/pods/q6v 569\nI0508 11:06:08.208727 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/default/pods/xbh 597\nI0508 11:06:08.408778 1 logs_generator.go:76] 7 POST /api/v1/namespaces/ns/pods/jhb 404\n" STEP: limiting log lines May 8 11:06:08.520: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4870 --tail=1' May 8 11:06:08.637: INFO: stderr: "" May 8 11:06:08.637: INFO: stdout: "I0508 11:06:08.608718 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/default/pods/jb9j 370\n" May 8 11:06:08.637: INFO: got output "I0508 11:06:08.608718 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/default/pods/jb9j 370\n" STEP: limiting log bytes May 8 11:06:08.638: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4870 --limit-bytes=1' May 8 11:06:08.751: INFO: stderr: "" May 8 11:06:08.751: INFO: stdout: "I" May 8 11:06:08.751: INFO: got output "I" STEP: exposing timestamps May 8 11:06:08.752: INFO: Running 
'/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4870 --tail=1 --timestamps' May 8 11:06:08.856: INFO: stderr: "" May 8 11:06:08.856: INFO: stdout: "2020-05-08T11:06:08.808923454Z I0508 11:06:08.808738 1 logs_generator.go:76] 9 POST /api/v1/namespaces/default/pods/7nm5 513\n" May 8 11:06:08.856: INFO: got output "2020-05-08T11:06:08.808923454Z I0508 11:06:08.808738 1 logs_generator.go:76] 9 POST /api/v1/namespaces/default/pods/7nm5 513\n" STEP: restricting to a time range May 8 11:06:11.356: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4870 --since=1s' May 8 11:06:11.472: INFO: stderr: "" May 8 11:06:11.472: INFO: stdout: "I0508 11:06:10.608756 1 logs_generator.go:76] 18 POST /api/v1/namespaces/kube-system/pods/msm 354\nI0508 11:06:10.808804 1 logs_generator.go:76] 19 POST /api/v1/namespaces/default/pods/mj2 216\nI0508 11:06:11.008808 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/ns/pods/597r 577\nI0508 11:06:11.208768 1 logs_generator.go:76] 21 GET /api/v1/namespaces/default/pods/c9cs 422\nI0508 11:06:11.408790 1 logs_generator.go:76] 22 GET /api/v1/namespaces/default/pods/2q7 359\n" May 8 11:06:11.473: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4870 --since=24h' May 8 11:06:11.574: INFO: stderr: "" May 8 11:06:11.574: INFO: stdout: "I0508 11:06:07.008598 1 logs_generator.go:76] 0 POST /api/v1/namespaces/default/pods/xkx 570\nI0508 11:06:07.209638 1 logs_generator.go:76] 1 POST /api/v1/namespaces/default/pods/h6g 371\nI0508 11:06:07.408783 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/kube-system/pods/ck4 339\nI0508 11:06:07.608761 1 logs_generator.go:76] 3 GET /api/v1/namespaces/kube-system/pods/hbp 411\nI0508 11:06:07.808827 1 
logs_generator.go:76] 4 GET /api/v1/namespaces/ns/pods/8nc 575\nI0508 11:06:08.008708 1 logs_generator.go:76] 5 GET /api/v1/namespaces/kube-system/pods/q6v 569\nI0508 11:06:08.208727 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/default/pods/xbh 597\nI0508 11:06:08.408778 1 logs_generator.go:76] 7 POST /api/v1/namespaces/ns/pods/jhb 404\nI0508 11:06:08.608718 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/default/pods/jb9j 370\nI0508 11:06:08.808738 1 logs_generator.go:76] 9 POST /api/v1/namespaces/default/pods/7nm5 513\nI0508 11:06:09.008774 1 logs_generator.go:76] 10 GET /api/v1/namespaces/ns/pods/nwb5 442\nI0508 11:06:09.208758 1 logs_generator.go:76] 11 PUT /api/v1/namespaces/ns/pods/dcr 276\nI0508 11:06:09.408766 1 logs_generator.go:76] 12 POST /api/v1/namespaces/ns/pods/7cjm 286\nI0508 11:06:09.608735 1 logs_generator.go:76] 13 POST /api/v1/namespaces/kube-system/pods/tmg 570\nI0508 11:06:09.808785 1 logs_generator.go:76] 14 PUT /api/v1/namespaces/kube-system/pods/hnl 550\nI0508 11:06:10.008892 1 logs_generator.go:76] 15 PUT /api/v1/namespaces/kube-system/pods/d4t 465\nI0508 11:06:10.208760 1 logs_generator.go:76] 16 PUT /api/v1/namespaces/default/pods/h7zn 485\nI0508 11:06:10.408761 1 logs_generator.go:76] 17 PUT /api/v1/namespaces/ns/pods/f2tr 465\nI0508 11:06:10.608756 1 logs_generator.go:76] 18 POST /api/v1/namespaces/kube-system/pods/msm 354\nI0508 11:06:10.808804 1 logs_generator.go:76] 19 POST /api/v1/namespaces/default/pods/mj2 216\nI0508 11:06:11.008808 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/ns/pods/597r 577\nI0508 11:06:11.208768 1 logs_generator.go:76] 21 GET /api/v1/namespaces/default/pods/c9cs 422\nI0508 11:06:11.408790 1 logs_generator.go:76] 22 GET /api/v1/namespaces/default/pods/2q7 359\n" [AfterEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1294 May 8 11:06:11.574: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 
--kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-4870' May 8 11:06:23.874: INFO: stderr: "" May 8 11:06:23.874: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 8 11:06:23.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4870" for this suite. • [SLOW TEST:19.915 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1284 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":275,"completed":62,"skipped":1015,"failed":0} SSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 8 11:06:23.882: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 8 11:07:23.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6784" for this suite. • [SLOW TEST:60.092 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":275,"completed":63,"skipped":1020,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 8 11:07:23.976: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Performing setup for networking test in namespace pod-network-test-6919 STEP: creating a selector STEP: Creating the 
service pods in kubernetes May 8 11:07:24.051: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable May 8 11:07:24.185: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 8 11:07:26.471: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 8 11:07:28.214: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 8 11:07:30.189: INFO: The status of Pod netserver-0 is Running (Ready = false) May 8 11:07:32.190: INFO: The status of Pod netserver-0 is Running (Ready = false) May 8 11:07:34.189: INFO: The status of Pod netserver-0 is Running (Ready = false) May 8 11:07:36.190: INFO: The status of Pod netserver-0 is Running (Ready = false) May 8 11:07:38.190: INFO: The status of Pod netserver-0 is Running (Ready = false) May 8 11:07:40.190: INFO: The status of Pod netserver-0 is Running (Ready = false) May 8 11:07:42.190: INFO: The status of Pod netserver-0 is Running (Ready = false) May 8 11:07:44.189: INFO: The status of Pod netserver-0 is Running (Ready = false) May 8 11:07:46.190: INFO: The status of Pod netserver-0 is Running (Ready = true) May 8 11:07:46.196: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods May 8 11:07:52.223: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.145:8080/dial?request=hostname&protocol=udp&host=10.244.2.144&port=8081&tries=1'] Namespace:pod-network-test-6919 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 8 11:07:52.223: INFO: >>> kubeConfig: /root/.kube/config I0508 11:07:52.256344 7 log.go:172] (0xc002c84000) (0xc0014eb540) Create stream I0508 11:07:52.256379 7 log.go:172] (0xc002c84000) (0xc0014eb540) Stream added, broadcasting: 1 I0508 11:07:52.259277 7 log.go:172] (0xc002c84000) Reply frame received for 1 I0508 11:07:52.259318 7 log.go:172] 
(0xc002c84000) (0xc000fd8000) Create stream I0508 11:07:52.259333 7 log.go:172] (0xc002c84000) (0xc000fd8000) Stream added, broadcasting: 3 I0508 11:07:52.260492 7 log.go:172] (0xc002c84000) Reply frame received for 3 I0508 11:07:52.260522 7 log.go:172] (0xc002c84000) (0xc002bb8320) Create stream I0508 11:07:52.260533 7 log.go:172] (0xc002c84000) (0xc002bb8320) Stream added, broadcasting: 5 I0508 11:07:52.261571 7 log.go:172] (0xc002c84000) Reply frame received for 5 I0508 11:07:52.348980 7 log.go:172] (0xc002c84000) Data frame received for 3 I0508 11:07:52.349018 7 log.go:172] (0xc000fd8000) (3) Data frame handling I0508 11:07:52.349041 7 log.go:172] (0xc000fd8000) (3) Data frame sent I0508 11:07:52.349306 7 log.go:172] (0xc002c84000) Data frame received for 3 I0508 11:07:52.349330 7 log.go:172] (0xc000fd8000) (3) Data frame handling I0508 11:07:52.349499 7 log.go:172] (0xc002c84000) Data frame received for 5 I0508 11:07:52.349517 7 log.go:172] (0xc002bb8320) (5) Data frame handling I0508 11:07:52.350887 7 log.go:172] (0xc002c84000) Data frame received for 1 I0508 11:07:52.350912 7 log.go:172] (0xc0014eb540) (1) Data frame handling I0508 11:07:52.350929 7 log.go:172] (0xc0014eb540) (1) Data frame sent I0508 11:07:52.350949 7 log.go:172] (0xc002c84000) (0xc0014eb540) Stream removed, broadcasting: 1 I0508 11:07:52.350987 7 log.go:172] (0xc002c84000) Go away received I0508 11:07:52.351022 7 log.go:172] (0xc002c84000) (0xc0014eb540) Stream removed, broadcasting: 1 I0508 11:07:52.351041 7 log.go:172] (0xc002c84000) (0xc000fd8000) Stream removed, broadcasting: 3 I0508 11:07:52.351049 7 log.go:172] (0xc002c84000) (0xc002bb8320) Stream removed, broadcasting: 5 May 8 11:07:52.351: INFO: Waiting for responses: map[] May 8 11:07:52.354: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.145:8080/dial?request=hostname&protocol=udp&host=10.244.1.192&port=8081&tries=1'] Namespace:pod-network-test-6919 PodName:test-container-pod ContainerName:webserver 
Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 8 11:07:52.354: INFO: >>> kubeConfig: /root/.kube/config I0508 11:07:52.384198 7 log.go:172] (0xc0028fa580) (0xc000fd8b40) Create stream I0508 11:07:52.384232 7 log.go:172] (0xc0028fa580) (0xc000fd8b40) Stream added, broadcasting: 1 I0508 11:07:52.387245 7 log.go:172] (0xc0028fa580) Reply frame received for 1 I0508 11:07:52.387298 7 log.go:172] (0xc0028fa580) (0xc0009f8dc0) Create stream I0508 11:07:52.387315 7 log.go:172] (0xc0028fa580) (0xc0009f8dc0) Stream added, broadcasting: 3 I0508 11:07:52.388468 7 log.go:172] (0xc0028fa580) Reply frame received for 3 I0508 11:07:52.388514 7 log.go:172] (0xc0028fa580) (0xc002bb8460) Create stream I0508 11:07:52.388531 7 log.go:172] (0xc0028fa580) (0xc002bb8460) Stream added, broadcasting: 5 I0508 11:07:52.389860 7 log.go:172] (0xc0028fa580) Reply frame received for 5 I0508 11:07:52.452805 7 log.go:172] (0xc0028fa580) Data frame received for 3 I0508 11:07:52.452837 7 log.go:172] (0xc0009f8dc0) (3) Data frame handling I0508 11:07:52.452857 7 log.go:172] (0xc0009f8dc0) (3) Data frame sent I0508 11:07:52.453306 7 log.go:172] (0xc0028fa580) Data frame received for 3 I0508 11:07:52.453362 7 log.go:172] (0xc0009f8dc0) (3) Data frame handling I0508 11:07:52.453437 7 log.go:172] (0xc0028fa580) Data frame received for 5 I0508 11:07:52.453475 7 log.go:172] (0xc002bb8460) (5) Data frame handling I0508 11:07:52.455204 7 log.go:172] (0xc0028fa580) Data frame received for 1 I0508 11:07:52.455239 7 log.go:172] (0xc000fd8b40) (1) Data frame handling I0508 11:07:52.455272 7 log.go:172] (0xc000fd8b40) (1) Data frame sent I0508 11:07:52.455294 7 log.go:172] (0xc0028fa580) (0xc000fd8b40) Stream removed, broadcasting: 1 I0508 11:07:52.455327 7 log.go:172] (0xc0028fa580) Go away received I0508 11:07:52.455491 7 log.go:172] (0xc0028fa580) (0xc000fd8b40) Stream removed, broadcasting: 1 I0508 11:07:52.455524 7 log.go:172] (0xc0028fa580) (0xc0009f8dc0) Stream removed, 
broadcasting: 3 I0508 11:07:52.455550 7 log.go:172] (0xc0028fa580) (0xc002bb8460) Stream removed, broadcasting: 5 May 8 11:07:52.455: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 8 11:07:52.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-6919" for this suite. • [SLOW TEST:28.488 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":275,"completed":64,"skipped":1113,"failed":0} SSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 8 11:07:52.465: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1418 [It] 
should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 8 11:07:52.555: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-9113' May 8 11:07:52.653: INFO: stderr: "" May 8 11:07:52.653: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1423 May 8 11:07:52.688: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-9113' May 8 11:07:56.470: INFO: stderr: "" May 8 11:07:56.470: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 8 11:07:56.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9113" for this suite. 
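Each completed spec in this run emits a machine-readable JSON summary (the `{"msg":"PASSED …"}` lines between test records). A minimal sketch, assuming Python is available, of extracting run progress from one such line, copied verbatim from this log:

```python
import json

# Summary line as emitted by this e2e run for the spec above.
line = ('{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create '
        'a pod from an image when restart is Never [Conformance]",'
        '"total":275,"completed":65,"skipped":1121,"failed":0}')

record = json.loads(line)
# "completed" counts finished specs out of the randomized total for the run;
# "skipped" counts the S-marked specs seen so far.
progress = record["completed"] / record["total"]
print(f'{record["completed"]}/{record["total"]} specs done '
      f'({progress:.1%}), {record["failed"]} failed')
```

This is only a reader's convenience for tailing the run, not part of the test framework itself.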
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":275,"completed":65,"skipped":1121,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 8 11:07:56.480: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change May 8 11:07:56.652: INFO: Pod name pod-release: Found 0 pods out of 1 May 8 11:08:01.681: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 8 11:08:01.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-3099" for this suite. 
• [SLOW TEST:5.626 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":275,"completed":66,"skipped":1154,"failed":0} SSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 8 11:08:02.106: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:171 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating server pod server in namespace prestop-9680 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-9680 STEP: Deleting pre-stop pod May 8 11:08:17.627: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. 
Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 8 11:08:17.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-9680" for this suite. • [SLOW TEST:15.583 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":275,"completed":67,"skipped":1159,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 8 11:08:17.690: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > 
/results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2302.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2302.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 8 11:08:24.081: INFO: DNS probes using dns-2302/dns-test-b6136855-2229-4111-8c59-b596f234f12b succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 8 11:08:24.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2302" for this suite. 
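The probe script above derives each pod's DNS A record name by rewriting the pod IP with dashes (the `awk -F. '{print $$1"-"$$2"-"$$3"-"$$4…}'` step). A minimal sketch of the same transformation in Python, using the test's namespace and a pod IP that appears earlier in this run:

```python
def pod_a_record(pod_ip: str, namespace: str,
                 cluster_domain: str = "cluster.local") -> str:
    """Build the <ip-with-dashes>.<namespace>.pod.<domain> name that the
    dig loop probes, mirroring the awk rewrite in the test script."""
    return f'{pod_ip.replace(".", "-")}.{namespace}.pod.{cluster_domain}'

name = pod_a_record("10.244.1.192", "dns-2302")
print(name)  # 10-244-1-192.dns-2302.pod.cluster.local
```

The `cluster.local` default matches the suffix used throughout this run's probe commands.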
• [SLOW TEST:6.487 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":275,"completed":68,"skipped":1180,"failed":0} SSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 8 11:08:24.177: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 8 11:08:28.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-6156" for this suite. 
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":275,"completed":69,"skipped":1186,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 8 11:08:28.668: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes May 8 11:08:28.760: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 8 11:08:43.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3488" for this suite. 
• [SLOW TEST:14.740 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":275,"completed":70,"skipped":1213,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 8 11:08:43.409: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 
discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 8 11:08:43.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-9662" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":275,"completed":71,"skipped":1242,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 8 11:08:43.496: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 8 11:08:48.306: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] 
[k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 8 11:08:48.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6502" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":72,"skipped":1313,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 8 11:08:48.412: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: validating api versions May 8 11:08:48.451: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config api-versions' May 8 11:08:48.682: INFO: stderr: "" May 8 11:08:48.682: INFO: stdout: 
"admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 8 11:08:48.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2633" for this suite. 
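The api-versions check above passes when the core group's bare `v1` appears in kubectl's newline-separated stdout. A minimal sketch, assuming Python, of the same membership test against a trimmed slice of the output shown above:

```python
# Trimmed sample of `kubectl api-versions` stdout from this run; the real
# output lists every group/version the apiserver serves.
stdout = "apps/v1\nbatch/v1\nnetworking.k8s.io/v1\nstorage.k8s.io/v1\nv1\n"

available = set(stdout.splitlines())
# Group-qualified entries like "apps/v1" are distinct from the core "v1".
has_core_v1 = "v1" in available
print(has_core_v1)  # True
```

Splitting on newlines and testing exact membership avoids false positives from substrings such as `apps/v1` or `batch/v1beta1`.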
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":275,"completed":73,"skipped":1315,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 8 11:08:48.713: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-projected-rjzv STEP: Creating a pod to test atomic-volume-subpath May 8 11:08:48.898: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-rjzv" in namespace "subpath-2702" to be "Succeeded or Failed" May 8 11:08:48.938: INFO: Pod "pod-subpath-test-projected-rjzv": Phase="Pending", Reason="", readiness=false. Elapsed: 40.097906ms May 8 11:08:50.943: INFO: Pod "pod-subpath-test-projected-rjzv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044643448s May 8 11:08:52.947: INFO: Pod "pod-subpath-test-projected-rjzv": Phase="Running", Reason="", readiness=true. Elapsed: 4.049068325s May 8 11:08:54.951: INFO: Pod "pod-subpath-test-projected-rjzv": Phase="Running", Reason="", readiness=true. 
Elapsed: 6.053487497s May 8 11:08:57.024: INFO: Pod "pod-subpath-test-projected-rjzv": Phase="Running", Reason="", readiness=true. Elapsed: 8.125706771s May 8 11:08:59.028: INFO: Pod "pod-subpath-test-projected-rjzv": Phase="Running", Reason="", readiness=true. Elapsed: 10.130474216s May 8 11:09:01.033: INFO: Pod "pod-subpath-test-projected-rjzv": Phase="Running", Reason="", readiness=true. Elapsed: 12.13461486s May 8 11:09:03.037: INFO: Pod "pod-subpath-test-projected-rjzv": Phase="Running", Reason="", readiness=true. Elapsed: 14.139317758s May 8 11:09:05.041: INFO: Pod "pod-subpath-test-projected-rjzv": Phase="Running", Reason="", readiness=true. Elapsed: 16.142795235s May 8 11:09:07.045: INFO: Pod "pod-subpath-test-projected-rjzv": Phase="Running", Reason="", readiness=true. Elapsed: 18.146547358s May 8 11:09:09.049: INFO: Pod "pod-subpath-test-projected-rjzv": Phase="Running", Reason="", readiness=true. Elapsed: 20.150563083s May 8 11:09:11.054: INFO: Pod "pod-subpath-test-projected-rjzv": Phase="Running", Reason="", readiness=true. Elapsed: 22.155522338s May 8 11:09:13.058: INFO: Pod "pod-subpath-test-projected-rjzv": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.16006404s STEP: Saw pod success May 8 11:09:13.058: INFO: Pod "pod-subpath-test-projected-rjzv" satisfied condition "Succeeded or Failed" May 8 11:09:13.062: INFO: Trying to get logs from node kali-worker2 pod pod-subpath-test-projected-rjzv container test-container-subpath-projected-rjzv: STEP: delete the pod May 8 11:09:13.124: INFO: Waiting for pod pod-subpath-test-projected-rjzv to disappear May 8 11:09:13.130: INFO: Pod pod-subpath-test-projected-rjzv no longer exists STEP: Deleting pod pod-subpath-test-projected-rjzv May 8 11:09:13.130: INFO: Deleting pod "pod-subpath-test-projected-rjzv" in namespace "subpath-2702" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 8 11:09:13.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-2702" for this suite. • [SLOW TEST:24.454 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":275,"completed":74,"skipped":1357,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 8 11:09:13.167: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-f1dd63a4-f007-478e-afa1-820bc6e73750 STEP: Creating a pod to test consume secrets May 8 11:09:13.359: INFO: Waiting up to 5m0s for pod "pod-secrets-7cd5a2ae-8439-4c06-83d4-4da63eb826f1" in namespace "secrets-9656" to be "Succeeded or Failed" May 8 11:09:13.376: INFO: Pod "pod-secrets-7cd5a2ae-8439-4c06-83d4-4da63eb826f1": Phase="Pending", Reason="", readiness=false. Elapsed: 16.927653ms May 8 11:09:15.380: INFO: Pod "pod-secrets-7cd5a2ae-8439-4c06-83d4-4da63eb826f1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020837212s May 8 11:09:17.384: INFO: Pod "pod-secrets-7cd5a2ae-8439-4c06-83d4-4da63eb826f1": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.024753683s STEP: Saw pod success May 8 11:09:17.384: INFO: Pod "pod-secrets-7cd5a2ae-8439-4c06-83d4-4da63eb826f1" satisfied condition "Succeeded or Failed" May 8 11:09:17.386: INFO: Trying to get logs from node kali-worker pod pod-secrets-7cd5a2ae-8439-4c06-83d4-4da63eb826f1 container secret-volume-test: STEP: delete the pod May 8 11:09:17.498: INFO: Waiting for pod pod-secrets-7cd5a2ae-8439-4c06-83d4-4da63eb826f1 to disappear May 8 11:09:17.802: INFO: Pod pod-secrets-7cd5a2ae-8439-4c06-83d4-4da63eb826f1 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 8 11:09:17.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9656" for this suite. STEP: Destroying namespace "secret-namespace-6750" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":275,"completed":75,"skipped":1370,"failed":0} SSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 8 11:09:18.106: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-286, will wait for the garbage collector to delete the pods May 8 11:09:24.455: INFO: 
Deleting Job.batch foo took: 7.419956ms May 8 11:09:24.855: INFO: Terminating Job.batch foo pods took: 400.241519ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 8 11:10:03.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-286" for this suite. • [SLOW TEST:45.660 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":275,"completed":76,"skipped":1374,"failed":0} S ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 8 11:10:03.766: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin May 8 11:10:03.828: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a6de9b41-66d8-4226-bc5f-1cdf0d75d253" in namespace "projected-8526" to be "Succeeded or Failed" May 8 
11:10:03.886: INFO: Pod "downwardapi-volume-a6de9b41-66d8-4226-bc5f-1cdf0d75d253": Phase="Pending", Reason="", readiness=false. Elapsed: 58.271129ms May 8 11:10:05.891: INFO: Pod "downwardapi-volume-a6de9b41-66d8-4226-bc5f-1cdf0d75d253": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06267831s May 8 11:10:07.896: INFO: Pod "downwardapi-volume-a6de9b41-66d8-4226-bc5f-1cdf0d75d253": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.068280785s STEP: Saw pod success May 8 11:10:07.896: INFO: Pod "downwardapi-volume-a6de9b41-66d8-4226-bc5f-1cdf0d75d253" satisfied condition "Succeeded or Failed" May 8 11:10:07.901: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-a6de9b41-66d8-4226-bc5f-1cdf0d75d253 container client-container: STEP: delete the pod May 8 11:10:07.916: INFO: Waiting for pod downwardapi-volume-a6de9b41-66d8-4226-bc5f-1cdf0d75d253 to disappear May 8 11:10:07.934: INFO: Pod downwardapi-volume-a6de9b41-66d8-4226-bc5f-1cdf0d75d253 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 8 11:10:07.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8526" for this suite. 
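Each of these pod tests follows the same pattern visible in the log: poll the pod's phase every couple of seconds, record the elapsed time, and stop once the phase is terminal or the 5m timeout expires. A hedged Python sketch of that loop; `get_phase`, `clock`, and `sleep` are injected here purely for illustration, and the real framework uses its own Go helpers rather than this function:

```python
import time

def wait_for_pod_phase(get_phase, timeout_s=300.0, interval_s=2.0,
                       clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until the pod is "Succeeded" or "Failed", mirroring
    the 'Waiting up to 5m0s for pod ... to be "Succeeded or Failed"' entries
    above. Returns (phase, elapsed_seconds); raises TimeoutError on expiry."""
    start = clock()
    while True:
        phase = get_phase()
        elapsed = clock() - start
        if phase in ("Succeeded", "Failed"):
            return phase, elapsed
        if elapsed >= timeout_s:
            raise TimeoutError(f"pod still {phase} after {elapsed:.1f}s")
        sleep(interval_s)

# Drive it with a fake clock, reproducing the Pending -> Pending -> Succeeded
# progression (and the roughly 4s total) seen in the log above.
phases = iter(["Pending", "Pending", "Succeeded"])
now = [0.0]
phase, elapsed = wait_for_pod_phase(
    lambda: next(phases),
    clock=lambda: now[0],
    sleep=lambda s: now.__setitem__(0, now[0] + s),
)
```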
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":275,"completed":77,"skipped":1375,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 8 11:10:07.942: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin May 8 11:10:08.050: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d39117e1-f09f-4796-9c1c-0911244ab964" in namespace "projected-8654" to be "Succeeded or Failed" May 8 11:10:08.072: INFO: Pod "downwardapi-volume-d39117e1-f09f-4796-9c1c-0911244ab964": Phase="Pending", Reason="", readiness=false. Elapsed: 22.713434ms May 8 11:10:10.077: INFO: Pod "downwardapi-volume-d39117e1-f09f-4796-9c1c-0911244ab964": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026807827s May 8 11:10:12.081: INFO: Pod "downwardapi-volume-d39117e1-f09f-4796-9c1c-0911244ab964": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.031171355s STEP: Saw pod success May 8 11:10:12.081: INFO: Pod "downwardapi-volume-d39117e1-f09f-4796-9c1c-0911244ab964" satisfied condition "Succeeded or Failed" May 8 11:10:12.083: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-d39117e1-f09f-4796-9c1c-0911244ab964 container client-container: STEP: delete the pod May 8 11:10:12.254: INFO: Waiting for pod downwardapi-volume-d39117e1-f09f-4796-9c1c-0911244ab964 to disappear May 8 11:10:12.425: INFO: Pod downwardapi-volume-d39117e1-f09f-4796-9c1c-0911244ab964 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 8 11:10:12.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8654" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":275,"completed":78,"skipped":1385,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 8 11:10:12.444: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted May 8 11:10:19.675: INFO: 0 pods remaining 
May 8 11:10:19.675: INFO: 0 pods has nil DeletionTimestamp
May 8 11:10:19.675: INFO:
May 8 11:10:20.759: INFO: 0 pods remaining
May 8 11:10:20.759: INFO: 0 pods has nil DeletionTimestamp
May 8 11:10:20.759: INFO:
May 8 11:10:22.016: INFO: 0 pods remaining
May 8 11:10:22.016: INFO: 0 pods has nil DeletionTimestamp
May 8 11:10:22.016: INFO:
STEP: Gathering metrics
W0508 11:10:23.055030 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 8 11:10:23.055: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 8 11:10:23.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4917" for this suite. 
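What this garbage-collector test verifies is foreground cascading deletion: when the delete options ask for `propagationPolicy: Foreground`, the RC is marked with a `foregroundDeletion` finalizer and kept around until the GC has removed all of its pods. A toy Python model of that ordering; the object names and the single-threaded loop are illustrative only, not the controller's actual implementation:

```python
def foreground_delete(owner, dependents):
    """Toy model of Foreground propagation: the owner gets a deletionTimestamp
    and a foregroundDeletion finalizer, its dependents are deleted first, and
    only after the finalizer is cleared does the owner itself go away."""
    owner["deletionTimestamp"] = "2020-05-08T11:10:19Z"  # illustrative value
    owner["finalizers"] = ["foregroundDeletion"]
    deletion_order = []
    while dependents:                      # GC removes dependent pods first
        deletion_order.append(dependents.pop())
    owner["finalizers"].clear()            # blocking dependents are gone
    deletion_order.append(owner["name"])   # owner disappears last
    return deletion_order

rc = {"name": "simpletest.rc"}             # hypothetical RC name
order = foreground_delete(rc, ["pod-b", "pod-a"])
# order == ['pod-a', 'pod-b', 'simpletest.rc']: the rc outlives its pods
```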
• [SLOW TEST:10.690 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":275,"completed":79,"skipped":1405,"failed":0} SSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 8 11:10:23.135: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-7775 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-7775 STEP: Waiting until 
all stateful set ss replicas will be running in namespace statefulset-7775 May 8 11:10:23.655: INFO: Found 0 stateful pods, waiting for 1 May 8 11:10:33.660: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod May 8 11:10:33.663: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7775 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 8 11:10:36.697: INFO: stderr: "I0508 11:10:36.553835 783 log.go:172] (0xc0000e6370) (0xc0003e8000) Create stream\nI0508 11:10:36.553874 783 log.go:172] (0xc0000e6370) (0xc0003e8000) Stream added, broadcasting: 1\nI0508 11:10:36.556089 783 log.go:172] (0xc0000e6370) Reply frame received for 1\nI0508 11:10:36.556134 783 log.go:172] (0xc0000e6370) (0xc0006a5220) Create stream\nI0508 11:10:36.556152 783 log.go:172] (0xc0000e6370) (0xc0006a5220) Stream added, broadcasting: 3\nI0508 11:10:36.557280 783 log.go:172] (0xc0000e6370) Reply frame received for 3\nI0508 11:10:36.557329 783 log.go:172] (0xc0000e6370) (0xc00016c000) Create stream\nI0508 11:10:36.557340 783 log.go:172] (0xc0000e6370) (0xc00016c000) Stream added, broadcasting: 5\nI0508 11:10:36.558400 783 log.go:172] (0xc0000e6370) Reply frame received for 5\nI0508 11:10:36.660544 783 log.go:172] (0xc0000e6370) Data frame received for 5\nI0508 11:10:36.660576 783 log.go:172] (0xc00016c000) (5) Data frame handling\nI0508 11:10:36.660593 783 log.go:172] (0xc00016c000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0508 11:10:36.690313 783 log.go:172] (0xc0000e6370) Data frame received for 3\nI0508 11:10:36.690337 783 log.go:172] (0xc0006a5220) (3) Data frame handling\nI0508 11:10:36.690347 783 log.go:172] (0xc0006a5220) (3) Data frame sent\nI0508 11:10:36.690354 783 log.go:172] (0xc0000e6370) Data frame received for 3\nI0508 
11:10:36.690359 783 log.go:172] (0xc0006a5220) (3) Data frame handling\nI0508 11:10:36.690791 783 log.go:172] (0xc0000e6370) Data frame received for 5\nI0508 11:10:36.690823 783 log.go:172] (0xc00016c000) (5) Data frame handling\nI0508 11:10:36.692464 783 log.go:172] (0xc0000e6370) Data frame received for 1\nI0508 11:10:36.692498 783 log.go:172] (0xc0003e8000) (1) Data frame handling\nI0508 11:10:36.692510 783 log.go:172] (0xc0003e8000) (1) Data frame sent\nI0508 11:10:36.692527 783 log.go:172] (0xc0000e6370) (0xc0003e8000) Stream removed, broadcasting: 1\nI0508 11:10:36.692549 783 log.go:172] (0xc0000e6370) Go away received\nI0508 11:10:36.692923 783 log.go:172] (0xc0000e6370) (0xc0003e8000) Stream removed, broadcasting: 1\nI0508 11:10:36.692950 783 log.go:172] (0xc0000e6370) (0xc0006a5220) Stream removed, broadcasting: 3\nI0508 11:10:36.692969 783 log.go:172] (0xc0000e6370) (0xc00016c000) Stream removed, broadcasting: 5\n" May 8 11:10:36.697: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 8 11:10:36.697: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 8 11:10:36.701: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 8 11:10:46.705: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 8 11:10:46.705: INFO: Waiting for statefulset status.replicas updated to 0 May 8 11:10:46.736: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999344s May 8 11:10:47.741: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.978658682s May 8 11:10:48.767: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.973827407s May 8 11:10:49.772: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.948122462s May 8 11:10:50.779: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.943355643s May 8 
11:10:51.784: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.93620029s May 8 11:10:52.789: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.931275677s May 8 11:10:53.794: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.925965977s May 8 11:10:54.799: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.9208723s May 8 11:10:55.803: INFO: Verifying statefulset ss doesn't scale past 1 for another 916.55107ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-7775 May 8 11:10:56.808: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7775 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 8 11:10:57.044: INFO: stderr: "I0508 11:10:56.936360 817 log.go:172] (0xc000aaa000) (0xc0009e8000) Create stream\nI0508 11:10:56.936452 817 log.go:172] (0xc000aaa000) (0xc0009e8000) Stream added, broadcasting: 1\nI0508 11:10:56.941571 817 log.go:172] (0xc000aaa000) Reply frame received for 1\nI0508 11:10:56.941626 817 log.go:172] (0xc000aaa000) (0xc0009e80a0) Create stream\nI0508 11:10:56.941647 817 log.go:172] (0xc000aaa000) (0xc0009e80a0) Stream added, broadcasting: 3\nI0508 11:10:56.942739 817 log.go:172] (0xc000aaa000) Reply frame received for 3\nI0508 11:10:56.942770 817 log.go:172] (0xc000aaa000) (0xc000abe000) Create stream\nI0508 11:10:56.942785 817 log.go:172] (0xc000aaa000) (0xc000abe000) Stream added, broadcasting: 5\nI0508 11:10:56.943906 817 log.go:172] (0xc000aaa000) Reply frame received for 5\nI0508 11:10:57.037524 817 log.go:172] (0xc000aaa000) Data frame received for 3\nI0508 11:10:57.037562 817 log.go:172] (0xc0009e80a0) (3) Data frame handling\nI0508 11:10:57.037583 817 log.go:172] (0xc0009e80a0) (3) Data frame sent\nI0508 11:10:57.037598 817 log.go:172] (0xc000aaa000) Data frame received for 3\nI0508 11:10:57.037608 817 
log.go:172] (0xc0009e80a0) (3) Data frame handling\nI0508 11:10:57.037713 817 log.go:172] (0xc000aaa000) Data frame received for 5\nI0508 11:10:57.037745 817 log.go:172] (0xc000abe000) (5) Data frame handling\nI0508 11:10:57.037767 817 log.go:172] (0xc000abe000) (5) Data frame sent\nI0508 11:10:57.037786 817 log.go:172] (0xc000aaa000) Data frame received for 5\nI0508 11:10:57.037804 817 log.go:172] (0xc000abe000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0508 11:10:57.039292 817 log.go:172] (0xc000aaa000) Data frame received for 1\nI0508 11:10:57.039331 817 log.go:172] (0xc0009e8000) (1) Data frame handling\nI0508 11:10:57.039356 817 log.go:172] (0xc0009e8000) (1) Data frame sent\nI0508 11:10:57.039385 817 log.go:172] (0xc000aaa000) (0xc0009e8000) Stream removed, broadcasting: 1\nI0508 11:10:57.039414 817 log.go:172] (0xc000aaa000) Go away received\nI0508 11:10:57.039839 817 log.go:172] (0xc000aaa000) (0xc0009e8000) Stream removed, broadcasting: 1\nI0508 11:10:57.039865 817 log.go:172] (0xc000aaa000) (0xc0009e80a0) Stream removed, broadcasting: 3\nI0508 11:10:57.039878 817 log.go:172] (0xc000aaa000) (0xc000abe000) Stream removed, broadcasting: 5\n" May 8 11:10:57.045: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 8 11:10:57.045: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 8 11:10:57.049: INFO: Found 1 stateful pods, waiting for 3 May 8 11:11:07.058: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 8 11:11:07.059: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 8 11:11:07.059: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod May 8 11:11:07.088: INFO: 
Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7775 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 8 11:11:07.299: INFO: stderr: "I0508 11:11:07.227977 837 log.go:172] (0xc0000e6d10) (0xc0009540a0) Create stream\nI0508 11:11:07.228045 837 log.go:172] (0xc0000e6d10) (0xc0009540a0) Stream added, broadcasting: 1\nI0508 11:11:07.230738 837 log.go:172] (0xc0000e6d10) Reply frame received for 1\nI0508 11:11:07.230783 837 log.go:172] (0xc0000e6d10) (0xc0006e3220) Create stream\nI0508 11:11:07.230797 837 log.go:172] (0xc0000e6d10) (0xc0006e3220) Stream added, broadcasting: 3\nI0508 11:11:07.231829 837 log.go:172] (0xc0000e6d10) Reply frame received for 3\nI0508 11:11:07.231869 837 log.go:172] (0xc0000e6d10) (0xc000954140) Create stream\nI0508 11:11:07.231883 837 log.go:172] (0xc0000e6d10) (0xc000954140) Stream added, broadcasting: 5\nI0508 11:11:07.232702 837 log.go:172] (0xc0000e6d10) Reply frame received for 5\nI0508 11:11:07.292384 837 log.go:172] (0xc0000e6d10) Data frame received for 5\nI0508 11:11:07.292419 837 log.go:172] (0xc000954140) (5) Data frame handling\nI0508 11:11:07.292430 837 log.go:172] (0xc000954140) (5) Data frame sent\nI0508 11:11:07.292463 837 log.go:172] (0xc0000e6d10) Data frame received for 5\nI0508 11:11:07.292475 837 log.go:172] (0xc000954140) (5) Data frame handling\nI0508 11:11:07.292486 837 log.go:172] (0xc0000e6d10) Data frame received for 3\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0508 11:11:07.292497 837 log.go:172] (0xc0006e3220) (3) Data frame handling\nI0508 11:11:07.292539 837 log.go:172] (0xc0006e3220) (3) Data frame sent\nI0508 11:11:07.292566 837 log.go:172] (0xc0000e6d10) Data frame received for 3\nI0508 11:11:07.292575 837 log.go:172] (0xc0006e3220) (3) Data frame handling\nI0508 11:11:07.294850 837 log.go:172] (0xc0000e6d10) Data frame received for 1\nI0508 11:11:07.294873 837 log.go:172] 
(0xc0009540a0) (1) Data frame handling\nI0508 11:11:07.294886 837 log.go:172] (0xc0009540a0) (1) Data frame sent\nI0508 11:11:07.294899 837 log.go:172] (0xc0000e6d10) (0xc0009540a0) Stream removed, broadcasting: 1\nI0508 11:11:07.294916 837 log.go:172] (0xc0000e6d10) Go away received\nI0508 11:11:07.295188 837 log.go:172] (0xc0000e6d10) (0xc0009540a0) Stream removed, broadcasting: 1\nI0508 11:11:07.295200 837 log.go:172] (0xc0000e6d10) (0xc0006e3220) Stream removed, broadcasting: 3\nI0508 11:11:07.295205 837 log.go:172] (0xc0000e6d10) (0xc000954140) Stream removed, broadcasting: 5\n" May 8 11:11:07.299: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 8 11:11:07.299: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 8 11:11:07.299: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7775 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 8 11:11:07.693: INFO: stderr: "I0508 11:11:07.434545 857 log.go:172] (0xc000b90fd0) (0xc000b88500) Create stream\nI0508 11:11:07.434605 857 log.go:172] (0xc000b90fd0) (0xc000b88500) Stream added, broadcasting: 1\nI0508 11:11:07.437645 857 log.go:172] (0xc000b90fd0) Reply frame received for 1\nI0508 11:11:07.437679 857 log.go:172] (0xc000b90fd0) (0xc000ade0a0) Create stream\nI0508 11:11:07.437687 857 log.go:172] (0xc000b90fd0) (0xc000ade0a0) Stream added, broadcasting: 3\nI0508 11:11:07.438538 857 log.go:172] (0xc000b90fd0) Reply frame received for 3\nI0508 11:11:07.438585 857 log.go:172] (0xc000b90fd0) (0xc000b885a0) Create stream\nI0508 11:11:07.438607 857 log.go:172] (0xc000b90fd0) (0xc000b885a0) Stream added, broadcasting: 5\nI0508 11:11:07.439449 857 log.go:172] (0xc000b90fd0) Reply frame received for 5\nI0508 11:11:07.497763 857 log.go:172] (0xc000b90fd0) Data frame received for 
5\nI0508 11:11:07.497787 857 log.go:172] (0xc000b885a0) (5) Data frame handling\nI0508 11:11:07.497804 857 log.go:172] (0xc000b885a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0508 11:11:07.684552 857 log.go:172] (0xc000b90fd0) Data frame received for 3\nI0508 11:11:07.684606 857 log.go:172] (0xc000ade0a0) (3) Data frame handling\nI0508 11:11:07.684631 857 log.go:172] (0xc000ade0a0) (3) Data frame sent\nI0508 11:11:07.684645 857 log.go:172] (0xc000b90fd0) Data frame received for 3\nI0508 11:11:07.684661 857 log.go:172] (0xc000ade0a0) (3) Data frame handling\nI0508 11:11:07.684806 857 log.go:172] (0xc000b90fd0) Data frame received for 5\nI0508 11:11:07.684848 857 log.go:172] (0xc000b885a0) (5) Data frame handling\nI0508 11:11:07.687328 857 log.go:172] (0xc000b90fd0) Data frame received for 1\nI0508 11:11:07.687362 857 log.go:172] (0xc000b88500) (1) Data frame handling\nI0508 11:11:07.687403 857 log.go:172] (0xc000b88500) (1) Data frame sent\nI0508 11:11:07.687438 857 log.go:172] (0xc000b90fd0) (0xc000b88500) Stream removed, broadcasting: 1\nI0508 11:11:07.687496 857 log.go:172] (0xc000b90fd0) Go away received\nI0508 11:11:07.687963 857 log.go:172] (0xc000b90fd0) (0xc000b88500) Stream removed, broadcasting: 1\nI0508 11:11:07.687986 857 log.go:172] (0xc000b90fd0) (0xc000ade0a0) Stream removed, broadcasting: 3\nI0508 11:11:07.687998 857 log.go:172] (0xc000b90fd0) (0xc000b885a0) Stream removed, broadcasting: 5\n" May 8 11:11:07.693: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 8 11:11:07.693: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 8 11:11:07.694: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7775 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 8 11:11:07.938: INFO: stderr: 
"I0508 11:11:07.807572 877 log.go:172] (0xc0005a8000) (0xc000978000) Create stream\nI0508 11:11:07.807607 877 log.go:172] (0xc0005a8000) (0xc000978000) Stream added, broadcasting: 1\nI0508 11:11:07.809954 877 log.go:172] (0xc0005a8000) Reply frame received for 1\nI0508 11:11:07.809992 877 log.go:172] (0xc0005a8000) (0xc000536b40) Create stream\nI0508 11:11:07.810003 877 log.go:172] (0xc0005a8000) (0xc000536b40) Stream added, broadcasting: 3\nI0508 11:11:07.810902 877 log.go:172] (0xc0005a8000) Reply frame received for 3\nI0508 11:11:07.810938 877 log.go:172] (0xc0005a8000) (0xc0009780a0) Create stream\nI0508 11:11:07.810951 877 log.go:172] (0xc0005a8000) (0xc0009780a0) Stream added, broadcasting: 5\nI0508 11:11:07.811971 877 log.go:172] (0xc0005a8000) Reply frame received for 5\nI0508 11:11:07.874735 877 log.go:172] (0xc0005a8000) Data frame received for 5\nI0508 11:11:07.874765 877 log.go:172] (0xc0009780a0) (5) Data frame handling\nI0508 11:11:07.874785 877 log.go:172] (0xc0009780a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0508 11:11:07.930145 877 log.go:172] (0xc0005a8000) Data frame received for 3\nI0508 11:11:07.930192 877 log.go:172] (0xc000536b40) (3) Data frame handling\nI0508 11:11:07.930222 877 log.go:172] (0xc000536b40) (3) Data frame sent\nI0508 11:11:07.930316 877 log.go:172] (0xc0005a8000) Data frame received for 3\nI0508 11:11:07.930348 877 log.go:172] (0xc000536b40) (3) Data frame handling\nI0508 11:11:07.930368 877 log.go:172] (0xc0005a8000) Data frame received for 5\nI0508 11:11:07.930379 877 log.go:172] (0xc0009780a0) (5) Data frame handling\nI0508 11:11:07.932483 877 log.go:172] (0xc0005a8000) Data frame received for 1\nI0508 11:11:07.932508 877 log.go:172] (0xc000978000) (1) Data frame handling\nI0508 11:11:07.932524 877 log.go:172] (0xc000978000) (1) Data frame sent\nI0508 11:11:07.932542 877 log.go:172] (0xc0005a8000) (0xc000978000) Stream removed, broadcasting: 1\nI0508 11:11:07.932776 877 log.go:172] 
(0xc0005a8000) Go away received\nI0508 11:11:07.932957 877 log.go:172] (0xc0005a8000) (0xc000978000) Stream removed, broadcasting: 1\nI0508 11:11:07.932977 877 log.go:172] (0xc0005a8000) (0xc000536b40) Stream removed, broadcasting: 3\nI0508 11:11:07.932990 877 log.go:172] (0xc0005a8000) (0xc0009780a0) Stream removed, broadcasting: 5\n"
May 8 11:11:07.938: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
May 8 11:11:07.938: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
May 8 11:11:07.938: INFO: Waiting for statefulset status.replicas updated to 0
May 8 11:11:07.941: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
May 8 11:11:17.956: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
May 8 11:11:17.956: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
May 8 11:11:17.956: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
May 8 11:11:18.003: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999998963s
May 8 11:11:19.011: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.992726143s
May 8 11:11:20.014: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.984927697s
May 8 11:11:21.020: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.981452323s
May 8 11:11:22.027: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.975909266s
May 8 11:11:23.032: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.968917177s
May 8 11:11:24.038: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.963848956s
May 8 11:11:25.043: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.958004247s
May 8 11:11:26.049: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.952646652s
May 8 11:11:27.055: INFO: Verifying statefulset ss doesn't scale past 3 for another 946.424159ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-7775
May 8 11:11:28.060: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7775 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 8 11:11:28.280: INFO: stderr: "I0508 11:11:28.206086 897 log.go:172] (0xc0005a82c0) (0xc000adc140) Create stream\nI0508 11:11:28.206150 897 log.go:172] (0xc0005a82c0) (0xc000adc140) Stream added, broadcasting: 1\nI0508 11:11:28.208726 897 log.go:172] (0xc0005a82c0) Reply frame received for 1\nI0508 11:11:28.208757 897 log.go:172] (0xc0005a82c0) (0xc0006a92c0) Create stream\nI0508 11:11:28.208766 897 log.go:172] (0xc0005a82c0) (0xc0006a92c0) Stream added, broadcasting: 3\nI0508 11:11:28.210289 897 log.go:172] (0xc0005a82c0) Reply frame received for 3\nI0508 11:11:28.210358 897 log.go:172] (0xc0005a82c0) (0xc000adc1e0) Create stream\nI0508 11:11:28.210375 897 log.go:172] (0xc0005a82c0) (0xc000adc1e0) Stream added, broadcasting: 5\nI0508 11:11:28.211179 897 log.go:172] (0xc0005a82c0) Reply frame received for 5\nI0508 11:11:28.273440 897 log.go:172] (0xc0005a82c0) Data frame received for 3\nI0508 11:11:28.273473 897 log.go:172] (0xc0006a92c0) (3) Data frame handling\nI0508 11:11:28.273497 897 log.go:172] (0xc0006a92c0) (3) Data frame sent\nI0508 11:11:28.273936 897 log.go:172] (0xc0005a82c0) Data frame received for 3\nI0508 11:11:28.273959 897 log.go:172] (0xc0006a92c0) (3) Data frame handling\nI0508 11:11:28.274237 897 log.go:172] (0xc0005a82c0) Data frame received for 5\nI0508 11:11:28.274252 897 log.go:172] (0xc000adc1e0) (5) Data frame handling\nI0508 11:11:28.274264 897 log.go:172] (0xc000adc1e0) (5) Data frame sent\nI0508 11:11:28.274273 897 log.go:172] (0xc0005a82c0) Data frame received for 5\nI0508 11:11:28.274279 897 log.go:172]
(0xc000adc1e0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0508 11:11:28.275968 897 log.go:172] (0xc0005a82c0) Data frame received for 1\nI0508 11:11:28.276165 897 log.go:172] (0xc000adc140) (1) Data frame handling\nI0508 11:11:28.276246 897 log.go:172] (0xc000adc140) (1) Data frame sent\nI0508 11:11:28.276267 897 log.go:172] (0xc0005a82c0) (0xc000adc140) Stream removed, broadcasting: 1\nI0508 11:11:28.276281 897 log.go:172] (0xc0005a82c0) Go away received\nI0508 11:11:28.276567 897 log.go:172] (0xc0005a82c0) (0xc000adc140) Stream removed, broadcasting: 1\nI0508 11:11:28.276581 897 log.go:172] (0xc0005a82c0) (0xc0006a92c0) Stream removed, broadcasting: 3\nI0508 11:11:28.276592 897 log.go:172] (0xc0005a82c0) (0xc000adc1e0) Stream removed, broadcasting: 5\n" May 8 11:11:28.280: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 8 11:11:28.280: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 8 11:11:28.280: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7775 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 8 11:11:28.540: INFO: stderr: "I0508 11:11:28.458933 917 log.go:172] (0xc000abfa20) (0xc000946aa0) Create stream\nI0508 11:11:28.458995 917 log.go:172] (0xc000abfa20) (0xc000946aa0) Stream added, broadcasting: 1\nI0508 11:11:28.461408 917 log.go:172] (0xc000abfa20) Reply frame received for 1\nI0508 11:11:28.461443 917 log.go:172] (0xc000abfa20) (0xc0009ba5a0) Create stream\nI0508 11:11:28.461453 917 log.go:172] (0xc000abfa20) (0xc0009ba5a0) Stream added, broadcasting: 3\nI0508 11:11:28.462545 917 log.go:172] (0xc000abfa20) Reply frame received for 3\nI0508 11:11:28.462571 917 log.go:172] (0xc000abfa20) (0xc0009ba640) Create stream\nI0508 11:11:28.462581 917 log.go:172] (0xc000abfa20) 
(0xc0009ba640) Stream added, broadcasting: 5\nI0508 11:11:28.463558 917 log.go:172] (0xc000abfa20) Reply frame received for 5\nI0508 11:11:28.533696 917 log.go:172] (0xc000abfa20) Data frame received for 5\nI0508 11:11:28.533724 917 log.go:172] (0xc0009ba640) (5) Data frame handling\nI0508 11:11:28.533739 917 log.go:172] (0xc0009ba640) (5) Data frame sent\nI0508 11:11:28.533757 917 log.go:172] (0xc000abfa20) Data frame received for 5\nI0508 11:11:28.533770 917 log.go:172] (0xc0009ba640) (5) Data frame handling\nI0508 11:11:28.533788 917 log.go:172] (0xc000abfa20) Data frame received for 3\nI0508 11:11:28.533798 917 log.go:172] (0xc0009ba5a0) (3) Data frame handling\nI0508 11:11:28.533820 917 log.go:172] (0xc0009ba5a0) (3) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0508 11:11:28.533880 917 log.go:172] (0xc000abfa20) Data frame received for 3\nI0508 11:11:28.533911 917 log.go:172] (0xc0009ba5a0) (3) Data frame handling\nI0508 11:11:28.535278 917 log.go:172] (0xc000abfa20) Data frame received for 1\nI0508 11:11:28.535299 917 log.go:172] (0xc000946aa0) (1) Data frame handling\nI0508 11:11:28.535317 917 log.go:172] (0xc000946aa0) (1) Data frame sent\nI0508 11:11:28.535337 917 log.go:172] (0xc000abfa20) (0xc000946aa0) Stream removed, broadcasting: 1\nI0508 11:11:28.535357 917 log.go:172] (0xc000abfa20) Go away received\nI0508 11:11:28.535795 917 log.go:172] (0xc000abfa20) (0xc000946aa0) Stream removed, broadcasting: 1\nI0508 11:11:28.535821 917 log.go:172] (0xc000abfa20) (0xc0009ba5a0) Stream removed, broadcasting: 3\nI0508 11:11:28.535834 917 log.go:172] (0xc000abfa20) (0xc0009ba640) Stream removed, broadcasting: 5\n" May 8 11:11:28.540: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 8 11:11:28.540: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 8 11:11:28.540: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7775 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 8 11:11:29.135: INFO: rc: 1
May 8 11:11:29.135: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7775 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr:
I0508 11:11:28.744187 935 log.go:172] (0xc00003a6e0) (0xc00063f720) Create stream
I0508 11:11:28.744261 935 log.go:172] (0xc00003a6e0) (0xc00063f720) Stream added, broadcasting: 1
I0508 11:11:28.746788 935 log.go:172] (0xc00003a6e0) Reply frame received for 1
I0508 11:11:28.746822 935 log.go:172] (0xc00003a6e0) (0xc000930000) Create stream
I0508 11:11:28.746832 935 log.go:172] (0xc00003a6e0) (0xc000930000) Stream added, broadcasting: 3
I0508 11:11:28.747891 935 log.go:172] (0xc00003a6e0) Reply frame received for 3
I0508 11:11:28.747939 935 log.go:172] (0xc00003a6e0) (0xc00081b360) Create stream
I0508 11:11:28.747961 935 log.go:172] (0xc00003a6e0) (0xc00081b360) Stream added, broadcasting: 5
I0508 11:11:28.748875 935 log.go:172] (0xc00003a6e0) Reply frame received for 5
I0508 11:11:29.129800 935 log.go:172] (0xc00003a6e0) (0xc000930000) Stream removed, broadcasting: 3
I0508 11:11:29.129937 935 log.go:172] (0xc00003a6e0) Data frame received for 1
I0508 11:11:29.129971 935 log.go:172] (0xc00063f720) (1) Data frame handling
I0508 11:11:29.129985 935 log.go:172] (0xc00063f720) (1) Data frame sent
I0508 11:11:29.130041 935 log.go:172] (0xc00003a6e0) (0xc00063f720) Stream removed, broadcasting: 1
I0508 11:11:29.130412 935 log.go:172] (0xc00003a6e0) (0xc00081b360) Stream removed, broadcasting: 5
I0508 11:11:29.130443 935 log.go:172] (0xc00003a6e0) (0xc00063f720) Stream removed, broadcasting: 1
I0508 11:11:29.130455 935 log.go:172] (0xc00003a6e0) (0xc000930000) Stream removed, broadcasting: 3
I0508 11:11:29.130468 935 log.go:172] (0xc00003a6e0) (0xc00081b360) Stream removed, broadcasting: 5
error: Internal error occurred: error executing command in container: failed to exec in container: failed to create exec "1f3a0cdb8b3754595632b61f22cca945b0ff9809ac46846ab5edd0bc15b33a1c": task a4cc807badac259deeb6858cd5f6f4c61cb5bcada0e3194aac356fc51d66c2d4 not found: not found
error: exit status 1
May 8 11:11:39.136: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7775 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 8 11:11:39.236: INFO: rc: 1
May 8 11:11:39.236: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7775 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr:
Error from server (NotFound): pods "ss-2" not found
error: exit status 1
May 8 11:11:49.236: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7775 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 8 11:11:49.345: INFO: rc: 1
May 8 11:11:49.345: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7775 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr:
Error from server (NotFound): pods "ss-2" not found
error: exit status 1
May 8 11:11:59.345: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7775 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 8 11:11:59.448: INFO: rc: 1
May 8 11:11:59.448: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7775 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr:
Error from server (NotFound): pods "ss-2" not found
error: exit status 1
May 8 11:12:09.448: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7775 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 8 11:12:09.555: INFO: rc: 1
May 8 11:12:09.555: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7775 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr:
Error from server (NotFound): pods "ss-2" not found
error: exit status 1
May 8 11:12:19.555: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7775 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 8 11:12:19.694: INFO: rc: 1
May 8 11:12:19.694: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7775 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr:
Error from server (NotFound): pods "ss-2" not found
error: exit status 1
May 8 11:12:29.694: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7775 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 8 11:12:29.807: INFO: rc: 1
May 8 11:12:29.807: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7775 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr:
Error from server (NotFound): pods "ss-2" not found
error: exit status 1
May 8 11:12:39.807: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7775 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 8 11:12:39.931: INFO: rc: 1
May 8 11:12:39.931: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7775 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr:
Error from server (NotFound): pods "ss-2" not found
error: exit status 1
May 8 11:12:49.932: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7775 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 8 11:12:50.037: INFO: rc: 1
May 8 11:12:50.037: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7775 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr:
Error from server (NotFound): pods "ss-2" not found
error: exit status 1
May 8 11:13:00.037: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7775 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 8 11:13:00.135: INFO: rc: 1
May 8 11:13:00.135: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7775 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr:
Error from server (NotFound): pods "ss-2" not found
error: exit status 1
May 8 11:13:10.135: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7775 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 8 11:13:10.226: INFO: rc: 1
May 8 11:13:10.226: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7775 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr:
Error from server (NotFound): pods "ss-2" not found
error: exit status 1
May 8 11:13:20.226: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7775 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 8 11:13:20.327: INFO: rc: 1
May 8 11:13:20.327: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7775 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr:
Error from server (NotFound): pods "ss-2" not found
error: exit status 1
May 8 11:13:30.328: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7775 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 8 11:13:30.421: INFO: rc: 1
May 8 11:13:30.421: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7775 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr:
Error from server (NotFound): pods "ss-2" not found
error: exit status 1
May 8 11:13:40.422: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7775 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 8 11:13:40.538: INFO: rc: 1
May 8 11:13:40.538: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7775 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr:
Error from server (NotFound): pods "ss-2" not found
error: exit status 1
May 8 11:13:50.538: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7775 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 8 11:13:50.649: INFO: rc: 1
May 8 11:13:50.649: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7775 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr:
Error from server (NotFound): pods "ss-2" not found
error: exit status 1
May 8 11:14:00.649: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7775 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 8 11:14:00.747: INFO: rc: 1
May 8 11:14:00.747: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7775 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr:
Error from server (NotFound): pods "ss-2" not found
error: exit status 1
May 8 11:14:10.748: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7775 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 8 11:14:10.844: INFO: rc: 1
May 8 11:14:10.844: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7775 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr:
Error from server (NotFound): pods "ss-2" not found
error: exit status 1
May 8 11:14:20.844: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7775 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 8 11:14:20.947: INFO: rc: 1
May 8 11:14:20.947: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7775 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr:
Error from server (NotFound): pods "ss-2" not found
error: exit status 1
May 8 11:14:30.947: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7775 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 8 11:14:31.034: INFO: rc: 1
May 8 11:14:31.034: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7775 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr:
Error from server (NotFound): pods "ss-2" not found
error: exit status 1
May 8 11:14:41.035: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7775 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 8 11:14:41.132: INFO: rc: 1
May 8 11:14:41.132: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7775 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr:
Error from server (NotFound): pods "ss-2" not found
error: exit status 1
May 8 11:14:51.132: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7775 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 8 11:14:51.488: INFO: rc: 1
May 8 11:14:51.488: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7775 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr:
Error from server (NotFound): pods "ss-2" not found
error: exit status 1
May 8 11:15:01.488: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7775 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 8 11:15:01.588: INFO: rc: 1
May 8 11:15:01.588: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7775 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr:
Error from server (NotFound): pods "ss-2" not found
error: exit status 1
May 8 11:15:11.588: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7775 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 8 11:15:11.687: INFO: rc: 1
May 8 11:15:11.687: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7775 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr:
Error from server (NotFound): pods "ss-2" not found
error: exit status 1
May 8 11:15:21.688: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7775 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 8 11:15:21.778: INFO: rc: 1
May 8 11:15:21.778: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7775 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr:
Error from server (NotFound): pods "ss-2" not found
error: exit status 1
May 8 11:15:31.779: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7775 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 8 11:15:31.876: INFO: rc: 1
May 8 11:15:31.876: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7775 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr:
Error from server (NotFound): pods "ss-2" not found
error: exit status 1
May 8 11:15:41.876: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7775 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 8 11:15:41.974: INFO: rc: 1
May 8 11:15:41.974: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7775 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr:
Error from server (NotFound): pods "ss-2" not found
error: exit status 1
May 8 11:15:51.974: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7775 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 8 11:15:52.067: INFO: rc: 1
May 8 11:15:52.067: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7775 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr:
Error from server (NotFound): pods "ss-2" not found
error: exit status 1
May 8 11:16:02.068: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7775 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 8 11:16:02.169: INFO: rc: 1
May 8 11:16:02.169: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7775 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr:
Error from server (NotFound): pods "ss-2" not found
error: exit status 1
May 8 11:16:12.170: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7775 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 8 11:16:12.275: INFO: rc: 1
May 8 11:16:12.275: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7775 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr:
Error from server (NotFound): pods "ss-2" not found
error: exit status 1
May 8 11:16:22.275: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7775 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 8 11:16:22.372: INFO: rc: 1
May 8 11:16:22.372: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7775 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr:
Error from server (NotFound): pods "ss-2" not found
error: exit status 1
May 8 11:16:32.372: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7775 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 8 11:16:32.471: INFO: rc: 1
May 8 11:16:32.471: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2:
May 8 11:16:32.471: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality
[StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
May 8 11:16:32.480: INFO: Deleting all statefulset in ns statefulset-7775
May 8 11:16:32.483: INFO: Scaling statefulset ss to 0
May 8 11:16:32.491: INFO: Waiting for statefulset status.replicas updated to 0
May 8 11:16:32.493: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 8 11:16:32.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-7775" for this suite.
• [SLOW TEST:369.387 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":275,"completed":80,"skipped":1412,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 8 11:16:32.523: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-map-1f5f4800-c9b2-4e92-b673-2eb6e564d9e8
STEP: Creating a pod to test consume secrets
May 8 11:16:32.627: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-121c0ff8-27fd-4fb2-ae4b-ac6c7b267d5e" in namespace "projected-8369" to be "Succeeded or Failed"
May 8 11:16:32.682: INFO: Pod "pod-projected-secrets-121c0ff8-27fd-4fb2-ae4b-ac6c7b267d5e": Phase="Pending", Reason="", readiness=false. Elapsed: 55.727387ms
May 8 11:16:34.687: INFO: Pod "pod-projected-secrets-121c0ff8-27fd-4fb2-ae4b-ac6c7b267d5e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060238809s
May 8 11:16:36.691: INFO: Pod "pod-projected-secrets-121c0ff8-27fd-4fb2-ae4b-ac6c7b267d5e": Phase="Running", Reason="", readiness=true. Elapsed: 4.064017277s
May 8 11:16:38.695: INFO: Pod "pod-projected-secrets-121c0ff8-27fd-4fb2-ae4b-ac6c7b267d5e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.068298385s
STEP: Saw pod success
May 8 11:16:38.695: INFO: Pod "pod-projected-secrets-121c0ff8-27fd-4fb2-ae4b-ac6c7b267d5e" satisfied condition "Succeeded or Failed"
May 8 11:16:38.721: INFO: Trying to get logs from node kali-worker pod pod-projected-secrets-121c0ff8-27fd-4fb2-ae4b-ac6c7b267d5e container projected-secret-volume-test:
STEP: delete the pod
May 8 11:16:38.773: INFO: Waiting for pod pod-projected-secrets-121c0ff8-27fd-4fb2-ae4b-ac6c7b267d5e to disappear
May 8 11:16:38.802: INFO: Pod pod-projected-secrets-121c0ff8-27fd-4fb2-ae4b-ac6c7b267d5e no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 8 11:16:38.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8369" for this suite.
• [SLOW TEST:6.296 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":81,"skipped":1424,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 8 11:16:38.819: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a
namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-map-c16f11e2-f7a8-4ca6-9bdf-6d555fb92fed STEP: Creating a pod to test consume secrets May 8 11:16:38.987: INFO: Waiting up to 5m0s for pod "pod-secrets-a4da3aa1-7ba8-4a76-9cae-16dbaad15e83" in namespace "secrets-2026" to be "Succeeded or Failed" May 8 11:16:38.994: INFO: Pod "pod-secrets-a4da3aa1-7ba8-4a76-9cae-16dbaad15e83": Phase="Pending", Reason="", readiness=false. Elapsed: 7.456441ms May 8 11:16:40.998: INFO: Pod "pod-secrets-a4da3aa1-7ba8-4a76-9cae-16dbaad15e83": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011200267s May 8 11:16:43.002: INFO: Pod "pod-secrets-a4da3aa1-7ba8-4a76-9cae-16dbaad15e83": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015249708s STEP: Saw pod success May 8 11:16:43.002: INFO: Pod "pod-secrets-a4da3aa1-7ba8-4a76-9cae-16dbaad15e83" satisfied condition "Succeeded or Failed" May 8 11:16:43.006: INFO: Trying to get logs from node kali-worker pod pod-secrets-a4da3aa1-7ba8-4a76-9cae-16dbaad15e83 container secret-volume-test: STEP: delete the pod May 8 11:16:43.025: INFO: Waiting for pod pod-secrets-a4da3aa1-7ba8-4a76-9cae-16dbaad15e83 to disappear May 8 11:16:43.030: INFO: Pod pod-secrets-a4da3aa1-7ba8-4a76-9cae-16dbaad15e83 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 8 11:16:43.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2026" for this suite. 
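
The Secrets-volume test above mounts a secret into a pod "with mappings", i.e. an `items` list that remaps a secret key to a chosen file path. A minimal sketch of the kind of manifest it exercises (all names, keys, and the image are illustrative, not taken from the log):

```yaml
# Hypothetical sketch of a pod consuming a secret via a volume with
# item mappings. Names/keys/image are examples; the real test generates
# suffixed names like secret-test-map-c16f11e2-....
apiVersion: v1
kind: Secret
metadata:
  name: secret-test-map
data:
  data-1: dmFsdWUtMQ==            # base64 for "value-1"
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox                 # illustrative image
    command: ["cat", "/etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-map
      items:                       # the "mapping": key -> new relative path
      - key: data-1
        path: new-path-data-1
```

The pod is expected to exit successfully after printing the mapped file, which matches the "Succeeded or Failed" condition the test waits on.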
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":82,"skipped":1447,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 8 11:16:43.039: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 8 11:16:59.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-5096" for this suite. 
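
The Job test above ("tasks sometimes fail and are locally restarted") relies on `restartPolicy: OnFailure`, under which the kubelet restarts the failed container in place rather than the Job controller creating replacement pods. A hedged sketch of such a Job (names, counts, image, and command are illustrative):

```yaml
# Hypothetical sketch of a Job whose tasks may fail and get restarted
# locally by the kubelet until the Job reaches its completion count.
apiVersion: batch/v1
kind: Job
metadata:
  name: fail-once-local            # illustrative name
spec:
  completions: 2
  parallelism: 2
  template:
    spec:
      restartPolicy: OnFailure     # local restarts, not new pods
      containers:
      - name: c
        image: busybox             # illustrative; the e2e suite uses its own test image
        # A real test command would fail on some attempts and succeed on
        # a retry; the exact behavior here is an assumption.
        command: ["sh", "-c", "exit 0"]
```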
• [SLOW TEST:16.119 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":275,"completed":83,"skipped":1512,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 8 11:16:59.157: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 8 11:17:12.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3919" for this suite. • [SLOW TEST:13.422 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance]","total":275,"completed":84,"skipped":1520,"failed":0} SSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 8 11:17:12.579: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 May 8 11:17:12.854: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 8 11:17:12.873: INFO: Waiting for terminating namespaces to be deleted... 
May 8 11:17:12.875: INFO: Logging pods the kubelet thinks is on node kali-worker before test May 8 11:17:12.880: INFO: kindnet-f8plf from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded) May 8 11:17:12.880: INFO: Container kindnet-cni ready: true, restart count 1 May 8 11:17:12.880: INFO: kube-proxy-vrswj from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded) May 8 11:17:12.880: INFO: Container kube-proxy ready: true, restart count 0 May 8 11:17:12.880: INFO: Logging pods the kubelet thinks is on node kali-worker2 before test May 8 11:17:12.898: INFO: kindnet-mcdh2 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded) May 8 11:17:12.898: INFO: Container kindnet-cni ready: true, restart count 0 May 8 11:17:12.898: INFO: kube-proxy-mmnb6 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded) May 8 11:17:12.898: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.160d0972facdb575], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 8 11:17:13.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-845" for this suite. 
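
The FailedScheduling event logged above ("0/3 nodes are available: 3 node(s) didn't match node selector.") comes from a pod carrying a nonempty `nodeSelector` that no node satisfies. A sketch of what such a pod looks like (the label key/value and image are assumptions; the pod name `restricted-pod` appears in the event):

```yaml
# Hypothetical reconstruction of the intentionally unschedulable pod:
# no node carries this label, so the scheduler emits FailedScheduling.
apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod
spec:
  nodeSelector:
    label-key: nonempty-value      # assumed label; matches no node
  containers:
  - name: c
    image: k8s.gcr.io/pause:3.2    # illustrative image
```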
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":275,"completed":85,"skipped":1524,"failed":0} SSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 8 11:17:13.980: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8991.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-8991.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8991.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-8991.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 8 11:17:20.288: INFO: DNS probes using dns-test-02212be2-5022-4038-85a2-d36e0b702628 succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8991.svc.cluster.local CNAME > 
/results/wheezy_udp@dns-test-service-3.dns-8991.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8991.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-8991.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 8 11:17:28.850: INFO: File wheezy_udp@dns-test-service-3.dns-8991.svc.cluster.local from pod dns-8991/dns-test-a9b401d1-aa39-4bd9-b4f4-b6574a00bee9 contains 'foo.example.com. ' instead of 'bar.example.com.' May 8 11:17:28.854: INFO: File jessie_udp@dns-test-service-3.dns-8991.svc.cluster.local from pod dns-8991/dns-test-a9b401d1-aa39-4bd9-b4f4-b6574a00bee9 contains 'foo.example.com. ' instead of 'bar.example.com.' May 8 11:17:28.854: INFO: Lookups using dns-8991/dns-test-a9b401d1-aa39-4bd9-b4f4-b6574a00bee9 failed for: [wheezy_udp@dns-test-service-3.dns-8991.svc.cluster.local jessie_udp@dns-test-service-3.dns-8991.svc.cluster.local] May 8 11:17:33.859: INFO: File wheezy_udp@dns-test-service-3.dns-8991.svc.cluster.local from pod dns-8991/dns-test-a9b401d1-aa39-4bd9-b4f4-b6574a00bee9 contains 'foo.example.com. ' instead of 'bar.example.com.' May 8 11:17:33.863: INFO: File jessie_udp@dns-test-service-3.dns-8991.svc.cluster.local from pod dns-8991/dns-test-a9b401d1-aa39-4bd9-b4f4-b6574a00bee9 contains 'foo.example.com. ' instead of 'bar.example.com.' May 8 11:17:33.863: INFO: Lookups using dns-8991/dns-test-a9b401d1-aa39-4bd9-b4f4-b6574a00bee9 failed for: [wheezy_udp@dns-test-service-3.dns-8991.svc.cluster.local jessie_udp@dns-test-service-3.dns-8991.svc.cluster.local] May 8 11:17:38.860: INFO: File wheezy_udp@dns-test-service-3.dns-8991.svc.cluster.local from pod dns-8991/dns-test-a9b401d1-aa39-4bd9-b4f4-b6574a00bee9 contains 'foo.example.com. ' instead of 'bar.example.com.' 
May 8 11:17:38.864: INFO: File jessie_udp@dns-test-service-3.dns-8991.svc.cluster.local from pod dns-8991/dns-test-a9b401d1-aa39-4bd9-b4f4-b6574a00bee9 contains 'foo.example.com. ' instead of 'bar.example.com.' May 8 11:17:38.864: INFO: Lookups using dns-8991/dns-test-a9b401d1-aa39-4bd9-b4f4-b6574a00bee9 failed for: [wheezy_udp@dns-test-service-3.dns-8991.svc.cluster.local jessie_udp@dns-test-service-3.dns-8991.svc.cluster.local] May 8 11:17:43.859: INFO: File wheezy_udp@dns-test-service-3.dns-8991.svc.cluster.local from pod dns-8991/dns-test-a9b401d1-aa39-4bd9-b4f4-b6574a00bee9 contains 'foo.example.com. ' instead of 'bar.example.com.' May 8 11:17:43.863: INFO: File jessie_udp@dns-test-service-3.dns-8991.svc.cluster.local from pod dns-8991/dns-test-a9b401d1-aa39-4bd9-b4f4-b6574a00bee9 contains 'foo.example.com. ' instead of 'bar.example.com.' May 8 11:17:43.863: INFO: Lookups using dns-8991/dns-test-a9b401d1-aa39-4bd9-b4f4-b6574a00bee9 failed for: [wheezy_udp@dns-test-service-3.dns-8991.svc.cluster.local jessie_udp@dns-test-service-3.dns-8991.svc.cluster.local] May 8 11:17:48.863: INFO: DNS probes using dns-test-a9b401d1-aa39-4bd9-b4f4-b6574a00bee9 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8991.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-8991.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8991.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-8991.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 8 11:17:55.523: INFO: DNS probes using dns-test-084d6cde-9b9f-4b02-90f8-6d3dd8704ab5 succeeded STEP: deleting the pod STEP: deleting the test externalName service 
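
The ExternalName service probed above is what the `dig ... CNAME` loops resolve: the service's `externalName` becomes the CNAME target, so patching it from `foo.example.com` to `bar.example.com` eventually shows up in the probe results (after the cache-related retries visible in the log). A sketch of the service as initially created (reconstructed from the log; field layout is the standard one, not copied from the test source):

```yaml
# Hypothetical reconstruction of the test's ExternalName service.
apiVersion: v1
kind: Service
metadata:
  name: dns-test-service-3
  namespace: dns-8991
spec:
  type: ExternalName
  externalName: foo.example.com    # later patched to bar.example.com,
                                   # then the service is switched to ClusterIP
```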
[AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 8 11:17:55.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8991" for this suite. • [SLOW TEST:41.654 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":275,"completed":86,"skipped":1527,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 8 11:17:55.635: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 8 11:18:01.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-9868" for this suite. 
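
The ReplicationController adoption test above creates an orphan pod with a `name` label first, then an RC whose selector matches it; the RC adopts the existing pod rather than creating a new replica. A hedged sketch (the label scheme follows the STEP messages; images and the template body are illustrative):

```yaml
# Hypothetical sketch of the adoption scenario: orphan pod first,
# then a matching-selector ReplicationController that adopts it.
apiVersion: v1
kind: Pod
metadata:
  name: pod-adoption
  labels:
    name: pod-adoption
spec:
  containers:
  - name: c
    image: k8s.gcr.io/pause:3.2    # illustrative image
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-adoption
spec:
  replicas: 1
  selector:
    name: pod-adoption             # matches the orphan pod's label
  template:
    metadata:
      labels:
        name: pod-adoption
    spec:
      containers:
      - name: c
        image: k8s.gcr.io/pause:3.2
```

With `replicas: 1` and one matching pod already running, adoption leaves the controller satisfied without scheduling anything new, which is what "Then the orphan pod is adopted" verifies.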
• [SLOW TEST:5.574 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":275,"completed":87,"skipped":1548,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 8 11:18:01.210: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0666 on node default medium May 8 11:18:01.651: INFO: Waiting up to 5m0s for pod "pod-c0dca231-f367-46ed-b8ad-a59f2286112f" in namespace "emptydir-9151" to be "Succeeded or Failed" May 8 11:18:01.663: INFO: Pod "pod-c0dca231-f367-46ed-b8ad-a59f2286112f": Phase="Pending", Reason="", readiness=false. Elapsed: 11.815672ms May 8 11:18:03.667: INFO: Pod "pod-c0dca231-f367-46ed-b8ad-a59f2286112f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016355424s May 8 11:18:05.671: INFO: Pod "pod-c0dca231-f367-46ed-b8ad-a59f2286112f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.020549073s May 8 11:18:07.675: INFO: Pod "pod-c0dca231-f367-46ed-b8ad-a59f2286112f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.02399506s STEP: Saw pod success May 8 11:18:07.675: INFO: Pod "pod-c0dca231-f367-46ed-b8ad-a59f2286112f" satisfied condition "Succeeded or Failed" May 8 11:18:07.677: INFO: Trying to get logs from node kali-worker pod pod-c0dca231-f367-46ed-b8ad-a59f2286112f container test-container: STEP: delete the pod May 8 11:18:07.712: INFO: Waiting for pod pod-c0dca231-f367-46ed-b8ad-a59f2286112f to disappear May 8 11:18:07.722: INFO: Pod pod-c0dca231-f367-46ed-b8ad-a59f2286112f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 8 11:18:07.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9151" for this suite. • [SLOW TEST:6.520 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":88,"skipped":1590,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 8 11:18:07.730: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a 
default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-7978.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-7978.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7978.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-7978.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-7978.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-7978.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 8 11:18:15.923: INFO: DNS probes using dns-7978/dns-test-11722d98-534d-4a1d-b9d4-7805ff5f188a succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 8 11:18:15.953: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7978" for this suite. • [SLOW TEST:8.357 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":275,"completed":89,"skipped":1604,"failed":0} SSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 8 11:18:16.087: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in 
namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-map-95fab450-ae65-4981-b48f-a505d1a0b98a STEP: Creating a pod to test consume configMaps May 8 11:18:16.423: INFO: Waiting up to 5m0s for pod "pod-configmaps-1d7dd77f-e985-4da0-a106-fd19a476adaf" in namespace "configmap-6926" to be "Succeeded or Failed" May 8 11:18:16.462: INFO: Pod "pod-configmaps-1d7dd77f-e985-4da0-a106-fd19a476adaf": Phase="Pending", Reason="", readiness=false. Elapsed: 38.606715ms May 8 11:18:18.555: INFO: Pod "pod-configmaps-1d7dd77f-e985-4da0-a106-fd19a476adaf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.131983379s May 8 11:18:20.560: INFO: Pod "pod-configmaps-1d7dd77f-e985-4da0-a106-fd19a476adaf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.136707285s May 8 11:18:22.564: INFO: Pod "pod-configmaps-1d7dd77f-e985-4da0-a106-fd19a476adaf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.140607263s STEP: Saw pod success May 8 11:18:22.564: INFO: Pod "pod-configmaps-1d7dd77f-e985-4da0-a106-fd19a476adaf" satisfied condition "Succeeded or Failed" May 8 11:18:22.567: INFO: Trying to get logs from node kali-worker pod pod-configmaps-1d7dd77f-e985-4da0-a106-fd19a476adaf container configmap-volume-test: STEP: delete the pod May 8 11:18:22.598: INFO: Waiting for pod pod-configmaps-1d7dd77f-e985-4da0-a106-fd19a476adaf to disappear May 8 11:18:22.609: INFO: Pod pod-configmaps-1d7dd77f-e985-4da0-a106-fd19a476adaf no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 8 11:18:22.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6926" for this suite. 
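
The ConfigMap-volume test above is the ConfigMap counterpart of the secret-mapping tests: an `items` list remaps a ConfigMap key to a chosen file path inside the mounted volume. A minimal sketch (names, key, path, and image are illustrative):

```yaml
# Hypothetical sketch of consuming a ConfigMap via a volume with
# item mappings, as exercised above. All names are examples.
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume-map
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox                 # illustrative image
    command: ["cat", "/etc/configmap-volume/new-path-data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map
      items:                       # remap key -> new relative path
      - key: data-1
        path: new-path-data-1
```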
• [SLOW TEST:6.529 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":90,"skipped":1611,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 8 11:18:22.617: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating service nodeport-test with type=NodePort in namespace services-872 STEP: creating replication controller nodeport-test in namespace services-872 I0508 11:18:22.942407 7 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-872, replica count: 2 I0508 11:18:25.992845 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0508 11:18:28.993386 7 runners.go:190] 
nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 8 11:18:28.993: INFO: Creating new exec pod May 8 11:18:34.014: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-872 execpodv48c8 -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' May 8 11:18:34.222: INFO: stderr: "I0508 11:18:34.135744 1561 log.go:172] (0xc0009ae6e0) (0xc0009c03c0) Create stream\nI0508 11:18:34.135828 1561 log.go:172] (0xc0009ae6e0) (0xc0009c03c0) Stream added, broadcasting: 1\nI0508 11:18:34.141460 1561 log.go:172] (0xc0009ae6e0) Reply frame received for 1\nI0508 11:18:34.141492 1561 log.go:172] (0xc0009ae6e0) (0xc00067b680) Create stream\nI0508 11:18:34.141500 1561 log.go:172] (0xc0009ae6e0) (0xc00067b680) Stream added, broadcasting: 3\nI0508 11:18:34.142254 1561 log.go:172] (0xc0009ae6e0) Reply frame received for 3\nI0508 11:18:34.142295 1561 log.go:172] (0xc0009ae6e0) (0xc000474aa0) Create stream\nI0508 11:18:34.142306 1561 log.go:172] (0xc0009ae6e0) (0xc000474aa0) Stream added, broadcasting: 5\nI0508 11:18:34.143100 1561 log.go:172] (0xc0009ae6e0) Reply frame received for 5\nI0508 11:18:34.215666 1561 log.go:172] (0xc0009ae6e0) Data frame received for 5\nI0508 11:18:34.215689 1561 log.go:172] (0xc000474aa0) (5) Data frame handling\nI0508 11:18:34.215702 1561 log.go:172] (0xc000474aa0) (5) Data frame sent\nI0508 11:18:34.215707 1561 log.go:172] (0xc0009ae6e0) Data frame received for 5\nI0508 11:18:34.215712 1561 log.go:172] (0xc000474aa0) (5) Data frame handling\n+ nc -zv -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0508 11:18:34.215726 1561 log.go:172] (0xc000474aa0) (5) Data frame sent\nI0508 11:18:34.215881 1561 log.go:172] (0xc0009ae6e0) Data frame received for 5\nI0508 11:18:34.215896 1561 log.go:172] (0xc000474aa0) (5) Data frame handling\nI0508 11:18:34.216045 1561 log.go:172] 
(0xc0009ae6e0) Data frame received for 3\nI0508 11:18:34.216060 1561 log.go:172] (0xc00067b680) (3) Data frame handling\nI0508 11:18:34.218025 1561 log.go:172] (0xc0009ae6e0) Data frame received for 1\nI0508 11:18:34.218044 1561 log.go:172] (0xc0009c03c0) (1) Data frame handling\nI0508 11:18:34.218060 1561 log.go:172] (0xc0009c03c0) (1) Data frame sent\nI0508 11:18:34.218073 1561 log.go:172] (0xc0009ae6e0) (0xc0009c03c0) Stream removed, broadcasting: 1\nI0508 11:18:34.218112 1561 log.go:172] (0xc0009ae6e0) Go away received\nI0508 11:18:34.218386 1561 log.go:172] (0xc0009ae6e0) (0xc0009c03c0) Stream removed, broadcasting: 1\nI0508 11:18:34.218406 1561 log.go:172] (0xc0009ae6e0) (0xc00067b680) Stream removed, broadcasting: 3\nI0508 11:18:34.218416 1561 log.go:172] (0xc0009ae6e0) (0xc000474aa0) Stream removed, broadcasting: 5\n" May 8 11:18:34.222: INFO: stdout: "" May 8 11:18:34.222: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-872 execpodv48c8 -- /bin/sh -x -c nc -zv -t -w 2 10.97.222.196 80' May 8 11:18:34.438: INFO: stderr: "I0508 11:18:34.352206 1581 log.go:172] (0xc0009288f0) (0xc000916320) Create stream\nI0508 11:18:34.352278 1581 log.go:172] (0xc0009288f0) (0xc000916320) Stream added, broadcasting: 1\nI0508 11:18:34.354913 1581 log.go:172] (0xc0009288f0) Reply frame received for 1\nI0508 11:18:34.354962 1581 log.go:172] (0xc0009288f0) (0xc0005bf680) Create stream\nI0508 11:18:34.354978 1581 log.go:172] (0xc0009288f0) (0xc0005bf680) Stream added, broadcasting: 3\nI0508 11:18:34.356071 1581 log.go:172] (0xc0009288f0) Reply frame received for 3\nI0508 11:18:34.356105 1581 log.go:172] (0xc0009288f0) (0xc000916500) Create stream\nI0508 11:18:34.356118 1581 log.go:172] (0xc0009288f0) (0xc000916500) Stream added, broadcasting: 5\nI0508 11:18:34.357352 1581 log.go:172] (0xc0009288f0) Reply frame received for 5\nI0508 11:18:34.432949 1581 log.go:172] (0xc0009288f0) Data frame 
received for 3\nI0508 11:18:34.432974 1581 log.go:172] (0xc0005bf680) (3) Data frame handling\nI0508 11:18:34.433034 1581 log.go:172] (0xc0009288f0) Data frame received for 5\nI0508 11:18:34.433075 1581 log.go:172] (0xc000916500) (5) Data frame handling\nI0508 11:18:34.433100 1581 log.go:172] (0xc000916500) (5) Data frame sent\n+ nc -zv -t -w 2 10.97.222.196 80\nConnection to 10.97.222.196 80 port [tcp/http] succeeded!\nI0508 11:18:34.433282 1581 log.go:172] (0xc0009288f0) Data frame received for 5\nI0508 11:18:34.433299 1581 log.go:172] (0xc000916500) (5) Data frame handling\nI0508 11:18:34.434411 1581 log.go:172] (0xc0009288f0) Data frame received for 1\nI0508 11:18:34.434423 1581 log.go:172] (0xc000916320) (1) Data frame handling\nI0508 11:18:34.434435 1581 log.go:172] (0xc000916320) (1) Data frame sent\nI0508 11:18:34.434445 1581 log.go:172] (0xc0009288f0) (0xc000916320) Stream removed, broadcasting: 1\nI0508 11:18:34.434716 1581 log.go:172] (0xc0009288f0) Go away received\nI0508 11:18:34.434733 1581 log.go:172] (0xc0009288f0) (0xc000916320) Stream removed, broadcasting: 1\nI0508 11:18:34.434747 1581 log.go:172] (0xc0009288f0) (0xc0005bf680) Stream removed, broadcasting: 3\nI0508 11:18:34.434755 1581 log.go:172] (0xc0009288f0) (0xc000916500) Stream removed, broadcasting: 5\n" May 8 11:18:34.439: INFO: stdout: "" May 8 11:18:34.439: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-872 execpodv48c8 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.15 31560' May 8 11:18:34.648: INFO: stderr: "I0508 11:18:34.581930 1603 log.go:172] (0xc000585ad0) (0xc00090c0a0) Create stream\nI0508 11:18:34.581998 1603 log.go:172] (0xc000585ad0) (0xc00090c0a0) Stream added, broadcasting: 1\nI0508 11:18:34.585027 1603 log.go:172] (0xc000585ad0) Reply frame received for 1\nI0508 11:18:34.585060 1603 log.go:172] (0xc000585ad0) (0xc00091c000) Create stream\nI0508 11:18:34.585070 1603 log.go:172] (0xc000585ad0) 
(0xc00091c000) Stream added, broadcasting: 3\nI0508 11:18:34.586216 1603 log.go:172] (0xc000585ad0) Reply frame received for 3\nI0508 11:18:34.586253 1603 log.go:172] (0xc000585ad0) (0xc00090c140) Create stream\nI0508 11:18:34.586270 1603 log.go:172] (0xc000585ad0) (0xc00090c140) Stream added, broadcasting: 5\nI0508 11:18:34.587267 1603 log.go:172] (0xc000585ad0) Reply frame received for 5\nI0508 11:18:34.639926 1603 log.go:172] (0xc000585ad0) Data frame received for 5\nI0508 11:18:34.639983 1603 log.go:172] (0xc00090c140) (5) Data frame handling\nI0508 11:18:34.640000 1603 log.go:172] (0xc00090c140) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.15 31560\nConnection to 172.17.0.15 31560 port [tcp/31560] succeeded!\nI0508 11:18:34.640016 1603 log.go:172] (0xc000585ad0) Data frame received for 3\nI0508 11:18:34.640054 1603 log.go:172] (0xc00091c000) (3) Data frame handling\nI0508 11:18:34.640435 1603 log.go:172] (0xc000585ad0) Data frame received for 5\nI0508 11:18:34.640463 1603 log.go:172] (0xc00090c140) (5) Data frame handling\nI0508 11:18:34.642372 1603 log.go:172] (0xc000585ad0) Data frame received for 1\nI0508 11:18:34.642397 1603 log.go:172] (0xc00090c0a0) (1) Data frame handling\nI0508 11:18:34.642421 1603 log.go:172] (0xc00090c0a0) (1) Data frame sent\nI0508 11:18:34.642523 1603 log.go:172] (0xc000585ad0) (0xc00090c0a0) Stream removed, broadcasting: 1\nI0508 11:18:34.642596 1603 log.go:172] (0xc000585ad0) Go away received\nI0508 11:18:34.642966 1603 log.go:172] (0xc000585ad0) (0xc00090c0a0) Stream removed, broadcasting: 1\nI0508 11:18:34.642985 1603 log.go:172] (0xc000585ad0) (0xc00091c000) Stream removed, broadcasting: 3\nI0508 11:18:34.642996 1603 log.go:172] (0xc000585ad0) (0xc00090c140) Stream removed, broadcasting: 5\n" May 8 11:18:34.648: INFO: stdout: "" May 8 11:18:34.648: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-872 execpodv48c8 -- /bin/sh -x -c nc -zv -t -w 
2 172.17.0.18 31560' May 8 11:18:34.933: INFO: stderr: "I0508 11:18:34.796930 1625 log.go:172] (0xc000902840) (0xc0008cc1e0) Create stream\nI0508 11:18:34.797014 1625 log.go:172] (0xc000902840) (0xc0008cc1e0) Stream added, broadcasting: 1\nI0508 11:18:34.799591 1625 log.go:172] (0xc000902840) Reply frame received for 1\nI0508 11:18:34.799645 1625 log.go:172] (0xc000902840) (0xc00062b720) Create stream\nI0508 11:18:34.799654 1625 log.go:172] (0xc000902840) (0xc00062b720) Stream added, broadcasting: 3\nI0508 11:18:34.800430 1625 log.go:172] (0xc000902840) Reply frame received for 3\nI0508 11:18:34.800459 1625 log.go:172] (0xc000902840) (0xc0008cc280) Create stream\nI0508 11:18:34.800466 1625 log.go:172] (0xc000902840) (0xc0008cc280) Stream added, broadcasting: 5\nI0508 11:18:34.801335 1625 log.go:172] (0xc000902840) Reply frame received for 5\nI0508 11:18:34.927497 1625 log.go:172] (0xc000902840) Data frame received for 3\nI0508 11:18:34.927525 1625 log.go:172] (0xc00062b720) (3) Data frame handling\nI0508 11:18:34.927560 1625 log.go:172] (0xc000902840) Data frame received for 5\nI0508 11:18:34.927584 1625 log.go:172] (0xc0008cc280) (5) Data frame handling\nI0508 11:18:34.927605 1625 log.go:172] (0xc0008cc280) (5) Data frame sent\nI0508 11:18:34.927624 1625 log.go:172] (0xc000902840) Data frame received for 5\nI0508 11:18:34.927636 1625 log.go:172] (0xc0008cc280) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.18 31560\nConnection to 172.17.0.18 31560 port [tcp/31560] succeeded!\nI0508 11:18:34.928632 1625 log.go:172] (0xc000902840) Data frame received for 1\nI0508 11:18:34.928667 1625 log.go:172] (0xc0008cc1e0) (1) Data frame handling\nI0508 11:18:34.928686 1625 log.go:172] (0xc0008cc1e0) (1) Data frame sent\nI0508 11:18:34.928711 1625 log.go:172] (0xc000902840) (0xc0008cc1e0) Stream removed, broadcasting: 1\nI0508 11:18:34.928726 1625 log.go:172] (0xc000902840) Go away received\nI0508 11:18:34.929265 1625 log.go:172] (0xc000902840) (0xc0008cc1e0) Stream removed, 
broadcasting: 1\nI0508 11:18:34.929291 1625 log.go:172] (0xc000902840) (0xc00062b720) Stream removed, broadcasting: 3\nI0508 11:18:34.929302 1625 log.go:172] (0xc000902840) (0xc0008cc280) Stream removed, broadcasting: 5\n" May 8 11:18:34.933: INFO: stdout: "" [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 8 11:18:34.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-872" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:12.325 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":275,"completed":91,"skipped":1649,"failed":0} SSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 8 11:18:34.942: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test override arguments May 8 11:18:35.078: INFO: Waiting up to 5m0s for pod "client-containers-aa0b6681-6142-443b-9d4c-4c7f653ba32a" in namespace "containers-4067" to be "Succeeded or Failed" May 8 11:18:35.089: INFO: Pod "client-containers-aa0b6681-6142-443b-9d4c-4c7f653ba32a": Phase="Pending", Reason="", readiness=false. Elapsed: 10.886438ms May 8 11:18:37.092: INFO: Pod "client-containers-aa0b6681-6142-443b-9d4c-4c7f653ba32a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014661481s May 8 11:18:39.097: INFO: Pod "client-containers-aa0b6681-6142-443b-9d4c-4c7f653ba32a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019410441s STEP: Saw pod success May 8 11:18:39.097: INFO: Pod "client-containers-aa0b6681-6142-443b-9d4c-4c7f653ba32a" satisfied condition "Succeeded or Failed" May 8 11:18:39.101: INFO: Trying to get logs from node kali-worker2 pod client-containers-aa0b6681-6142-443b-9d4c-4c7f653ba32a container test-container: STEP: delete the pod May 8 11:18:39.135: INFO: Waiting for pod client-containers-aa0b6681-6142-443b-9d4c-4c7f653ba32a to disappear May 8 11:18:39.149: INFO: Pod client-containers-aa0b6681-6142-443b-9d4c-4c7f653ba32a no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 8 11:18:39.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-4067" for this suite. 
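The behavior this test exercises — replacing an image's default CMD via the pod spec — can be sketched with a manifest along these lines (pod name, image, and argument values here are illustrative, not the ones the test used):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: override-args-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    # command is left unset, so the image ENTRYPOINT is kept;
    # args replaces the image's default CMD
    args: ["echo", "overridden arguments"]
```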
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":275,"completed":92,"skipped":1656,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 8 11:18:39.157: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 8 11:18:47.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-6219" for this suite. 
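The kubelet behavior checked here — a terminated reason recorded for a container that always fails — can be reproduced with a sketch like the following (pod name is hypothetical; the real test uses its own busybox command):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: bin-false-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: always-fails
    image: docker.io/library/busybox:1.29
    command: ["/bin/false"]   # exits non-zero every time
# Once the container exits, the kubelet records a terminated state:
#   kubectl get pod bin-false-demo \
#     -o jsonpath='{.status.containerStatuses[0].state.terminated.reason}'
# A non-zero exit is reported with reason "Error".
```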
• [SLOW TEST:8.312 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:79 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":275,"completed":93,"skipped":1664,"failed":0} SSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 8 11:18:47.470: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 8 11:18:56.018: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 8 11:18:56.024: INFO: Pod pod-with-prestop-exec-hook still exists May 8 11:18:58.024: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 8 11:18:58.030: INFO: Pod pod-with-prestop-exec-hook still exists May 8 11:19:00.024: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 8 11:19:00.052: INFO: Pod pod-with-prestop-exec-hook still exists May 8 11:19:02.024: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 8 11:19:02.029: INFO: Pod pod-with-prestop-exec-hook still exists May 8 11:19:04.024: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 8 11:19:04.028: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 8 11:19:04.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-3279" for this suite. 
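A preStop exec hook of the kind this test drives can be declared as below (a sketch; in the actual test the hook command calls back to a separate handler pod rather than writing a file):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: prestop-demo   # hypothetical name
spec:
  containers:
  - name: main
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "sleep 600"]
    lifecycle:
      preStop:
        exec:
          # runs inside the container when deletion starts,
          # before SIGTERM is delivered
          command: ["sh", "-c", "echo prestop >> /tmp/hook.log"]
```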
• [SLOW TEST:16.638 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":275,"completed":94,"skipped":1670,"failed":0} S ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 8 11:19:04.108: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods May 8 11:19:13.102: INFO: Successfully updated pod "adopt-release-98ldd" STEP: Checking that the Job readopts the Pod May 8 11:19:13.102: INFO: Waiting up to 15m0s for pod "adopt-release-98ldd" in namespace "job-1953" to be "adopted" May 8 11:19:13.125: INFO: Pod "adopt-release-98ldd": Phase="Running", Reason="", readiness=true. 
Elapsed: 23.409989ms May 8 11:19:15.129: INFO: Pod "adopt-release-98ldd": Phase="Running", Reason="", readiness=true. Elapsed: 2.027639009s May 8 11:19:15.130: INFO: Pod "adopt-release-98ldd" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod May 8 11:19:15.638: INFO: Successfully updated pod "adopt-release-98ldd" STEP: Checking that the Job releases the Pod May 8 11:19:15.638: INFO: Waiting up to 15m0s for pod "adopt-release-98ldd" in namespace "job-1953" to be "released" May 8 11:19:15.663: INFO: Pod "adopt-release-98ldd": Phase="Running", Reason="", readiness=true. Elapsed: 24.415917ms May 8 11:19:17.727: INFO: Pod "adopt-release-98ldd": Phase="Running", Reason="", readiness=true. Elapsed: 2.088433445s May 8 11:19:17.727: INFO: Pod "adopt-release-98ldd" satisfied condition "released" [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 8 11:19:17.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-1953" for this suite. 
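The adopt/release mechanics validated above rest on the Job controller matching pods by label. A minimal Job of the shape used here might look like this (name, label, and command are illustrative):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: adopt-release-demo   # hypothetical name
spec:
  parallelism: 2
  template:
    metadata:
      labels:
        job: adopt-release-demo
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: docker.io/library/busybox:1.29
        command: ["sh", "-c", "sleep 300"]
# Stripping a pod's ownerReferences orphans it; the Job controller
# re-adopts the pod while its labels still match the Job's selector,
# and releases it again once those labels are removed.
```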
• [SLOW TEST:13.828 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":275,"completed":95,"skipped":1671,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 8 11:19:17.936: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation May 8 11:19:18.731: INFO: >>> kubeConfig: /root/.kube/config May 8 11:19:21.715: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 8 11:19:32.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6710" for this suite. 
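The OpenAPI-publishing check above registers CRDs under distinct API groups. One such CRD could be sketched as follows (group and kind names are hypothetical); a second CRD identical in shape but with a different `spec.group` gives the "different groups" case, and both schemas then appear in the aggregated document served at `/openapi/v2`:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.groupa.example.com   # hypothetical group
spec:
  group: groupa.example.com
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
```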
• [SLOW TEST:14.522 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":275,"completed":96,"skipped":1681,"failed":0} SSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 8 11:19:32.458: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-c36b3f94-7485-4e9d-a5d0-f0ba9426e2af STEP: Creating a pod to test consume configMaps May 8 11:19:32.625: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-5c28579d-ee71-48fa-ae1a-31d28f00f4a1" in namespace "projected-4357" to be "Succeeded or Failed" May 8 11:19:32.674: INFO: Pod "pod-projected-configmaps-5c28579d-ee71-48fa-ae1a-31d28f00f4a1": Phase="Pending", Reason="", readiness=false. 
Elapsed: 49.14905ms May 8 11:19:34.890: INFO: Pod "pod-projected-configmaps-5c28579d-ee71-48fa-ae1a-31d28f00f4a1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.264787901s May 8 11:19:36.893: INFO: Pod "pod-projected-configmaps-5c28579d-ee71-48fa-ae1a-31d28f00f4a1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.26791844s STEP: Saw pod success May 8 11:19:36.893: INFO: Pod "pod-projected-configmaps-5c28579d-ee71-48fa-ae1a-31d28f00f4a1" satisfied condition "Succeeded or Failed" May 8 11:19:36.896: INFO: Trying to get logs from node kali-worker pod pod-projected-configmaps-5c28579d-ee71-48fa-ae1a-31d28f00f4a1 container projected-configmap-volume-test: STEP: delete the pod May 8 11:19:36.950: INFO: Waiting for pod pod-projected-configmaps-5c28579d-ee71-48fa-ae1a-31d28f00f4a1 to disappear May 8 11:19:36.958: INFO: Pod pod-projected-configmaps-5c28579d-ee71-48fa-ae1a-31d28f00f4a1 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 8 11:19:36.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4357" for this suite. 
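Consuming one ConfigMap through two projected volumes in the same pod, as this test does, can be sketched like so (names, mount paths, and the key read by the command are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: reader
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/cm-1/key /etc/cm-2/key"]
    volumeMounts:
    - name: cm-vol-1
      mountPath: /etc/cm-1
    - name: cm-vol-2
      mountPath: /etc/cm-2
  volumes:
  - name: cm-vol-1
    projected:
      sources:
      - configMap:
          name: demo-cm   # hypothetical ConfigMap
  - name: cm-vol-2
    projected:
      sources:
      - configMap:
          name: demo-cm   # same ConfigMap, second volume
```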
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":275,"completed":97,"skipped":1685,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 8 11:19:36.967: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:157 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 8 11:19:37.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5676" for this suite. 
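The QoS rule verified above — matching requests and limits for memory and CPU — can be shown with a minimal spec (pod name and resource values are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: qos-demo   # hypothetical name
spec:
  containers:
  - name: main
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "sleep 300"]
    resources:
      requests:
        cpu: 100m
        memory: 64Mi
      limits:
        cpu: 100m      # equal to the request
        memory: 64Mi   # equal to the request
# With requests == limits for every container, the API server sets
# status.qosClass to "Guaranteed".
```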
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":275,"completed":98,"skipped":1701,"failed":0} ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 8 11:19:37.994: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 8 11:19:38.078: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-1289' May 8 11:19:38.354: INFO: stderr: "" May 8 11:19:38.354: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created May 8 11:19:43.405: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-1289 -o json' May 8 11:19:43.516: INFO: stderr: "" May 
8 11:19:43.516: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-05-08T11:19:38Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"managedFields\": [\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:labels\": {\n \".\": {},\n \"f:run\": {}\n }\n },\n \"f:spec\": {\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n \".\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": {},\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {}\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:terminationGracePeriodSeconds\": {}\n }\n },\n \"manager\": \"kubectl\",\n \"operation\": \"Update\",\n \"time\": \"2020-05-08T11:19:38Z\"\n },\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:status\": {\n \"f:conditions\": {\n \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n }\n },\n \"f:containerStatuses\": {},\n \"f:hostIP\": {},\n \"f:phase\": {},\n \"f:podIP\": {},\n \"f:podIPs\": {\n \".\": {},\n \"k:{\\\"ip\\\":\\\"10.244.1.221\\\"}\": {\n \".\": {},\n \"f:ip\": {}\n }\n },\n \"f:startTime\": {}\n }\n },\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"time\": \"2020-05-08T11:19:41Z\"\n }\n ],\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-1289\",\n \"resourceVersion\": \"2568605\",\n 
\"selfLink\": \"/api/v1/namespaces/kubectl-1289/pods/e2e-test-httpd-pod\",\n \"uid\": \"9e3215dd-2b03-4175-85e9-fafdc4c33e4f\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-hfjmw\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"kali-worker2\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-hfjmw\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-hfjmw\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-08T11:19:38Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-08T11:19:41Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-08T11:19:41Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-08T11:19:38Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n 
\"containerStatuses\": [\n {\n \"containerID\": \"containerd://ab1d8bf860412c8dc43e74894d61efea3964658f25e2057dc6f6e395ea27863e\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-05-08T11:19:41Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.18\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.221\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.1.221\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-05-08T11:19:38Z\"\n }\n}\n" STEP: replace the image in the pod May 8 11:19:43.517: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-1289' May 8 11:19:43.848: INFO: stderr: "" May 8 11:19:43.848: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459 May 8 11:19:43.851: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-1289' May 8 11:19:47.724: INFO: stderr: "" May 8 11:19:47.724: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 8 11:19:47.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1289" for this suite. 
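The replace step above pipes an edited manifest back through `kubectl replace -f - --namespace=kubectl-1289`. Conceptually the change is just the container image; in practice `replace` is a full update, so the test edits the complete live object fetched with `-o json` rather than a sparse spec like this abbreviated sketch:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: e2e-test-httpd-pod
  namespace: kubectl-1289
spec:
  containers:
  - name: e2e-test-httpd-pod
    image: docker.io/library/busybox:1.29   # replaces httpd:2.4.38-alpine
# Abbreviated for illustration: a real replace must carry the rest of the
# live spec unchanged, since pod fields other than the image are immutable.
```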
• [SLOW TEST:9.749 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1450
    should update a single-container pod's image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":275,"completed":99,"skipped":1701,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:19:47.743: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should serve a basic endpoint from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating service endpoint-test2 in namespace services-5876
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5876 to expose endpoints map[]
May  8 11:19:47.880: INFO: Get endpoints failed (7.393301ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
May  8 11:19:48.896: INFO: successfully validated that service endpoint-test2 in namespace services-5876 exposes endpoints map[] (1.022851547s elapsed)
STEP: Creating pod pod1 in namespace services-5876
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5876 to expose endpoints map[pod1:[80]]
May  8 11:19:53.799: INFO: successfully validated that service endpoint-test2 in namespace services-5876 exposes endpoints map[pod1:[80]] (4.826104024s elapsed)
STEP: Creating pod pod2 in namespace services-5876
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5876 to expose endpoints map[pod1:[80] pod2:[80]]
May  8 11:19:58.863: INFO: Unexpected endpoints: found map[e4db8973-b111-4390-8084-6a892bdb4d65:[80]], expected map[pod1:[80] pod2:[80]] (4.666198585s elapsed, will retry)
May  8 11:19:59.873: INFO: successfully validated that service endpoint-test2 in namespace services-5876 exposes endpoints map[pod1:[80] pod2:[80]] (5.676834278s elapsed)
STEP: Deleting pod pod1 in namespace services-5876
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5876 to expose endpoints map[pod2:[80]]
May  8 11:20:00.996: INFO: successfully validated that service endpoint-test2 in namespace services-5876 exposes endpoints map[pod2:[80]] (1.118517749s elapsed)
STEP: Deleting pod pod2 in namespace services-5876
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5876 to expose endpoints map[]
May  8 11:20:01.032: INFO: successfully validated that service endpoint-test2 in namespace services-5876 exposes endpoints map[] (30.231246ms elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:20:01.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5876" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:13.333 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":275,"completed":100,"skipped":1715,"failed":0}
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:20:01.077: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a ResourceQuota with best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a best-effort pod
STEP: Ensuring resource quota with best effort scope captures the pod usage
STEP: Ensuring resource quota with not best effort ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a not best-effort pod
STEP: Ensuring resource quota with not best effort scope captures the pod usage
STEP: Ensuring resource quota with best effort scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:20:17.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-3574" for this suite.

• [SLOW TEST:16.773 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":275,"completed":101,"skipped":1715,"failed":0}
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl version 
  should check is all data is printed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:20:17.850: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should check is all data is printed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May  8 11:20:17.932: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config version'
May  8 11:20:18.103: INFO: stderr: ""
May  8 11:20:18.104: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"18\", GitVersion:\"v1.18.2\", GitCommit:\"52c56ce7a8272c798dbc29846288d7cd9fbae032\", GitTreeState:\"clean\", BuildDate:\"2020-05-06T19:24:20Z\", GoVersion:\"go1.13.10\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"18\", GitVersion:\"v1.18.2\", GitCommit:\"52c56ce7a8272c798dbc29846288d7cd9fbae032\", GitTreeState:\"clean\", BuildDate:\"2020-04-28T05:35:31Z\", GoVersion:\"go1.13.9\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:20:18.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3072" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":275,"completed":102,"skipped":1724,"failed":0}
SSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:20:18.111: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May  8 11:20:18.165: INFO: (0) /api/v1/nodes/kali-worker/proxy/logs/:
alternatives.log
containers/

>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating Agnhost RC
May  8 11:20:18.353: INFO: namespace kubectl-3669
May  8 11:20:18.354: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3669'
May  8 11:20:18.601: INFO: stderr: ""
May  8 11:20:18.601: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
May  8 11:20:19.605: INFO: Selector matched 1 pods for map[app:agnhost]
May  8 11:20:19.606: INFO: Found 0 / 1
May  8 11:20:20.606: INFO: Selector matched 1 pods for map[app:agnhost]
May  8 11:20:20.606: INFO: Found 0 / 1
May  8 11:20:21.606: INFO: Selector matched 1 pods for map[app:agnhost]
May  8 11:20:21.606: INFO: Found 0 / 1
May  8 11:20:22.606: INFO: Selector matched 1 pods for map[app:agnhost]
May  8 11:20:22.606: INFO: Found 1 / 1
May  8 11:20:22.606: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
May  8 11:20:22.610: INFO: Selector matched 1 pods for map[app:agnhost]
May  8 11:20:22.610: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
May  8 11:20:22.610: INFO: wait on agnhost-master startup in kubectl-3669 
May  8 11:20:22.610: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config logs agnhost-master-ddnlr agnhost-master --namespace=kubectl-3669'
May  8 11:20:22.721: INFO: stderr: ""
May  8 11:20:22.721: INFO: stdout: "Paused\n"
STEP: exposing RC
May  8 11:20:22.721: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-3669'
May  8 11:20:22.918: INFO: stderr: ""
May  8 11:20:22.918: INFO: stdout: "service/rm2 exposed\n"
May  8 11:20:22.924: INFO: Service rm2 in namespace kubectl-3669 found.
STEP: exposing service
May  8 11:20:24.930: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-3669'
May  8 11:20:25.098: INFO: stderr: ""
May  8 11:20:25.099: INFO: stdout: "service/rm3 exposed\n"
May  8 11:20:25.145: INFO: Service rm3 in namespace kubectl-3669 found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:20:27.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3669" for this suite.

• [SLOW TEST:8.887 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1119
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc  [Conformance]","total":275,"completed":104,"skipped":1738,"failed":0}
SSSSSSSSSSSSSSSSS
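The `kubectl expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379` invocation logged above is equivalent to creating a Service by hand. A minimal sketch of the rm2 Service it generates, assuming the selector is the `app: agnhost` pod label shown in the earlier "Selector matched" lines:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: rm2
  namespace: kubectl-3669
spec:
  selector:
    app: agnhost        # assumed from the pod label selector in the log
  ports:
  - port: 1234          # --port
    targetPort: 6379    # --target-port
```

The follow-up `kubectl expose service rm2 --name=rm3 --port=2345 --target-port=6379` then produces a second Service (rm3) with the same selector but port 2345, which is why both rm2 and rm3 are found serving the same pods.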
------------------------------
[sig-scheduling] LimitRange 
  should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] LimitRange
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:20:27.160: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename limitrange
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a LimitRange
STEP: Setting up watch
STEP: Submitting a LimitRange
May  8 11:20:27.269: INFO: observed the limitRanges list
STEP: Verifying LimitRange creation was observed
STEP: Fetching the LimitRange to ensure it has proper values
May  8 11:20:27.290: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {}  BinarySI} memory:{{209715200 0} {}  BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {}  BinarySI} memory:{{209715200 0} {}  BinarySI}]
May  8 11:20:27.290: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
STEP: Creating a Pod with no resource requirements
STEP: Ensuring Pod has resource requirements applied from LimitRange
May  8 11:20:27.359: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {}  BinarySI} memory:{{209715200 0} {}  BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {}  BinarySI} memory:{{209715200 0} {}  BinarySI}]
May  8 11:20:27.359: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
STEP: Creating a Pod with partial resource requirements
STEP: Ensuring Pod has merged resource requirements applied from LimitRange
May  8 11:20:27.416: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}]
May  8 11:20:27.416: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
STEP: Failing to create a Pod with less than min resources
STEP: Failing to create a Pod with more than max resources
STEP: Updating a LimitRange
STEP: Verifying LimitRange updating is effective
STEP: Creating a Pod with less than former min resources
STEP: Failing to create a Pod with more than max resources
STEP: Deleting a LimitRange
STEP: Verifying the LimitRange was deleted
May  8 11:20:35.200: INFO: limitRange is already deleted
STEP: Creating a Pod with more than former max resources
[AfterEach] [sig-scheduling] LimitRange
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:20:35.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "limitrange-8060" for this suite.

• [SLOW TEST:8.121 seconds]
[sig-scheduling] LimitRange
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":275,"completed":105,"skipped":1755,"failed":0}
SSSSSSSSSSSSS
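The "Verifying requests/limits" lines above print raw byte quantities: 209715200 bytes is 200Mi, 214748364800 bytes is 200Gi, 524288000 is 500Mi, and 536870912000 is 500Gi. A hedged sketch of a LimitRange that would produce those observed defaults; the object name is a placeholder, and the min/max bounds the test also enforces are omitted because their values never appear in the log:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: limitrange-defaults    # placeholder; the test's generated name is not logged
spec:
  limits:
  - type: Container
    defaultRequest:            # filled in when a container omits resource requests
      cpu: 100m
      memory: 200Mi
      ephemeral-storage: 200Gi
    default:                   # filled in when a container omits resource limits
      cpu: 500m
      memory: 500Mi
      ephemeral-storage: 500Gi
```

This also explains the "partial resource requirements" case: a pod that sets only some fields (300m CPU, 150Mi/150Gi requests) keeps its own values and inherits the rest from the LimitRange defaults.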
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:20:35.282: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May  8 11:20:35.364: INFO: Creating ReplicaSet my-hostname-basic-a777e1f7-d875-4f9f-a5a7-569b0432fbd8
May  8 11:20:35.423: INFO: Pod name my-hostname-basic-a777e1f7-d875-4f9f-a5a7-569b0432fbd8: Found 0 pods out of 1
May  8 11:20:40.441: INFO: Pod name my-hostname-basic-a777e1f7-d875-4f9f-a5a7-569b0432fbd8: Found 1 pods out of 1
May  8 11:20:40.441: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-a777e1f7-d875-4f9f-a5a7-569b0432fbd8" is running
May  8 11:20:40.459: INFO: Pod "my-hostname-basic-a777e1f7-d875-4f9f-a5a7-569b0432fbd8-7cb4b" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-08 11:20:35 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-08 11:20:39 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-08 11:20:39 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-08 11:20:35 +0000 UTC Reason: Message:}])
May  8 11:20:40.459: INFO: Trying to dial the pod
May  8 11:20:45.470: INFO: Controller my-hostname-basic-a777e1f7-d875-4f9f-a5a7-569b0432fbd8: Got expected result from replica 1 [my-hostname-basic-a777e1f7-d875-4f9f-a5a7-569b0432fbd8-7cb4b]: "my-hostname-basic-a777e1f7-d875-4f9f-a5a7-569b0432fbd8-7cb4b", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:20:45.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-471" for this suite.

• [SLOW TEST:10.197 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","total":275,"completed":106,"skipped":1768,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
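The ReplicaSet test above dials each replica and expects the pod's own name back. A sketch of the kind of manifest being created, assuming an agnhost serve-hostname container (the actual image and tag are not recorded in this log):

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-hostname-basic-a777e1f7-d875-4f9f-a5a7-569b0432fbd8
spec:
  replicas: 1
  selector:
    matchLabels:
      name: my-hostname-basic-a777e1f7-d875-4f9f-a5a7-569b0432fbd8
  template:
    metadata:
      labels:
        name: my-hostname-basic-a777e1f7-d875-4f9f-a5a7-569b0432fbd8
    spec:
      containers:
      - name: my-hostname-basic
        # assumption: image/tag are placeholders; the log only shows that each
        # replica answers with its own pod name, i.e. serve-hostname behavior
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.12
        args: ["serve-hostname"]
        ports:
        - containerPort: 8080
```

The "Got expected result from replica 1" line corresponds to an HTTP response whose body equals the pod name, confirming the replica is serving.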
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:20:45.479: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-77ad90c4-6198-405e-8cde-35a12c0d2394
STEP: Creating a pod to test consume configMaps
May  8 11:20:45.618: INFO: Waiting up to 5m0s for pod "pod-configmaps-f26bcb0e-c0a4-42e1-9aca-282eb48fd4ff" in namespace "configmap-5287" to be "Succeeded or Failed"
May  8 11:20:45.628: INFO: Pod "pod-configmaps-f26bcb0e-c0a4-42e1-9aca-282eb48fd4ff": Phase="Pending", Reason="", readiness=false. Elapsed: 10.254849ms
May  8 11:20:47.633: INFO: Pod "pod-configmaps-f26bcb0e-c0a4-42e1-9aca-282eb48fd4ff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014772463s
May  8 11:20:49.638: INFO: Pod "pod-configmaps-f26bcb0e-c0a4-42e1-9aca-282eb48fd4ff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019437115s
STEP: Saw pod success
May  8 11:20:49.638: INFO: Pod "pod-configmaps-f26bcb0e-c0a4-42e1-9aca-282eb48fd4ff" satisfied condition "Succeeded or Failed"
May  8 11:20:49.641: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-f26bcb0e-c0a4-42e1-9aca-282eb48fd4ff container configmap-volume-test: 
STEP: delete the pod
May  8 11:20:49.669: INFO: Waiting for pod pod-configmaps-f26bcb0e-c0a4-42e1-9aca-282eb48fd4ff to disappear
May  8 11:20:49.673: INFO: Pod pod-configmaps-f26bcb0e-c0a4-42e1-9aca-282eb48fd4ff no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:20:49.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5287" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":107,"skipped":1831,"failed":0}
SSSSSSS
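The ConfigMap volume test above creates a ConfigMap, mounts it into a pod, and expects the pod to exit Succeeded after reading it. A hedged sketch of such a pod, reusing the ConfigMap name from the log; the pod name, image, and command are placeholders (the container name `configmap-volume-test` does appear in the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example    # the test used a generated name
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox                # assumption: the actual test image is not logged
    command: ["sh", "-c", "cat /etc/configmap-volume/*"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-77ad90c4-6198-405e-8cde-35a12c0d2394
```

A pod like this reaches `Phase="Succeeded"` once the container reads the mounted keys and exits 0, which is the condition the test waits up to 5m0s for.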
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:20:49.680: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicationController
STEP: Ensuring resource quota status captures replication controller creation
STEP: Deleting a ReplicationController
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:21:00.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-8608" for this suite.

• [SLOW TEST:11.174 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":275,"completed":108,"skipped":1838,"failed":0}
SSSSS
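The quota exercised above counts ReplicationController objects: creation raises the used count, deletion releases it. A minimal sketch of such a quota; the object name and the exact hard limit are placeholders, since the log records only the test steps:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: test-quota                 # placeholder; the generated name is not logged
spec:
  hard:
    replicationcontrollers: "1"    # assumed count; the actual limit is not logged
```

With this in place, `status.used.replicationcontrollers` moves from 0 to 1 when the RC is created and back to 0 when it is deleted, which is what the "captures ... creation" and "released usage" steps verify.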
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:21:00.854: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May  8 11:21:00.971: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6e65651b-db2d-4ab5-a79a-2d11794ac052" in namespace "downward-api-2232" to be "Succeeded or Failed"
May  8 11:21:01.040: INFO: Pod "downwardapi-volume-6e65651b-db2d-4ab5-a79a-2d11794ac052": Phase="Pending", Reason="", readiness=false. Elapsed: 69.516933ms
May  8 11:21:03.045: INFO: Pod "downwardapi-volume-6e65651b-db2d-4ab5-a79a-2d11794ac052": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074384589s
May  8 11:21:05.049: INFO: Pod "downwardapi-volume-6e65651b-db2d-4ab5-a79a-2d11794ac052": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.078281045s
STEP: Saw pod success
May  8 11:21:05.049: INFO: Pod "downwardapi-volume-6e65651b-db2d-4ab5-a79a-2d11794ac052" satisfied condition "Succeeded or Failed"
May  8 11:21:05.052: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-6e65651b-db2d-4ab5-a79a-2d11794ac052 container client-container: 
STEP: delete the pod
May  8 11:21:05.144: INFO: Waiting for pod downwardapi-volume-6e65651b-db2d-4ab5-a79a-2d11794ac052 to disappear
May  8 11:21:05.153: INFO: Pod downwardapi-volume-6e65651b-db2d-4ab5-a79a-2d11794ac052 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:21:05.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2232" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":275,"completed":109,"skipped":1843,"failed":0}
SSSSSSSSS
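The downward API volume test above projects the container's own CPU request into a mounted file and checks the pod's log output. A hedged sketch of that mechanism; the pod name, image, request value, and file path are placeholders (only the container name `client-container` appears in the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # the test used a generated name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                   # assumption: the actual test image is not logged
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m                    # placeholder; the request used by the test is not logged
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.cpu
          divisor: 1m                # expose the request in millicores
```

With a 1m divisor, the mounted file would contain the request in millicores (here "250"), which the test reads back from the container logs.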
------------------------------
[sig-api-machinery] Secrets 
  should patch a secret [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:21:05.160: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should patch a secret [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a secret
STEP: listing secrets in all namespaces to ensure that there are more than zero
STEP: patching the secret
STEP: deleting the secret using a LabelSelector
STEP: listing secrets in all namespaces, searching for label name and value in patch
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:21:05.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7720" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":275,"completed":110,"skipped":1852,"failed":0}
SSSSSSS
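The Secrets test above patches a secret, then deletes it by label and confirms the labeled secret is gone. A hedged sketch of the kind of patch the "patching the secret" step applies; the label key/value and data below are placeholders, as the log does not record the actual payload:

```yaml
# Applied as a strategic merge patch to the secret under test
metadata:
  labels:
    testsecret: "true"   # placeholder label, later matched by the LabelSelector delete
data:
  key: dmFsdWUy          # base64 of "value2"; placeholder data
```

The "deleting the secret using a LabelSelector" step then corresponds to a delete filtered on that label rather than on the secret's name, and the final list confirms no secret with the patched label remains.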
------------------------------
[sig-cli] Kubectl client Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:21:05.288: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:271
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a replication controller
May  8 11:21:05.449: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-841'
May  8 11:21:09.870: INFO: stderr: ""
May  8 11:21:09.870: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
May  8 11:21:09.870: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-841'
May  8 11:21:10.022: INFO: stderr: ""
May  8 11:21:10.023: INFO: stdout: "update-demo-nautilus-h22bq update-demo-nautilus-nb9zc "
May  8 11:21:10.023: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-h22bq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-841'
May  8 11:21:10.127: INFO: stderr: ""
May  8 11:21:10.127: INFO: stdout: ""
May  8 11:21:10.127: INFO: update-demo-nautilus-h22bq is created but not running
May  8 11:21:15.127: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-841'
May  8 11:21:15.230: INFO: stderr: ""
May  8 11:21:15.230: INFO: stdout: "update-demo-nautilus-h22bq update-demo-nautilus-nb9zc "
May  8 11:21:15.230: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-h22bq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-841'
May  8 11:21:15.325: INFO: stderr: ""
May  8 11:21:15.325: INFO: stdout: "true"
May  8 11:21:15.326: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-h22bq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-841'
May  8 11:21:15.411: INFO: stderr: ""
May  8 11:21:15.412: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May  8 11:21:15.412: INFO: validating pod update-demo-nautilus-h22bq
May  8 11:21:15.415: INFO: got data: {
  "image": "nautilus.jpg"
}

May  8 11:21:15.415: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May  8 11:21:15.415: INFO: update-demo-nautilus-h22bq is verified up and running
May  8 11:21:15.415: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nb9zc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-841'
May  8 11:21:15.508: INFO: stderr: ""
May  8 11:21:15.508: INFO: stdout: "true"
May  8 11:21:15.509: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nb9zc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-841'
May  8 11:21:15.597: INFO: stderr: ""
May  8 11:21:15.597: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May  8 11:21:15.597: INFO: validating pod update-demo-nautilus-nb9zc
May  8 11:21:15.601: INFO: got data: {
  "image": "nautilus.jpg"
}

May  8 11:21:15.601: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May  8 11:21:15.601: INFO: update-demo-nautilus-nb9zc is verified up and running
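The probes above rely on kubectl's go-template output: an empty stdout means the named container has no running state yet (so the framework retries after roughly 5 seconds), while "true" confirms it is running. The same check can be sketched standalone, using the pod and namespace names from the log (requires a live cluster):

```shell
# Prints "true" only if the "update-demo" container in the pod reports a
# running state; prints nothing while the pod is still Pending.
# "exists" is a helper function kubectl registers for -o template output.
kubectl get pods update-demo-nautilus-h22bq --namespace=kubectl-841 \
  -o template \
  --template='{{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
```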
STEP: scaling down the replication controller
May  8 11:21:15.603: INFO: scanned /root for discovery docs: 
May  8 11:21:15.603: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-841'
May  8 11:21:16.709: INFO: stderr: ""
May  8 11:21:16.709: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
May  8 11:21:16.709: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-841'
May  8 11:21:16.948: INFO: stderr: ""
May  8 11:21:16.948: INFO: stdout: "update-demo-nautilus-h22bq update-demo-nautilus-nb9zc "
STEP: Replicas for name=update-demo: expected=1 actual=2
May  8 11:21:21.948: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-841'
May  8 11:21:22.044: INFO: stderr: ""
May  8 11:21:22.044: INFO: stdout: "update-demo-nautilus-h22bq "
May  8 11:21:22.044: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-h22bq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-841'
May  8 11:21:22.135: INFO: stderr: ""
May  8 11:21:22.135: INFO: stdout: "true"
May  8 11:21:22.135: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-h22bq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-841'
May  8 11:21:22.224: INFO: stderr: ""
May  8 11:21:22.224: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May  8 11:21:22.224: INFO: validating pod update-demo-nautilus-h22bq
May  8 11:21:22.227: INFO: got data: {
  "image": "nautilus.jpg"
}

May  8 11:21:22.227: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May  8 11:21:22.227: INFO: update-demo-nautilus-h22bq is verified up and running
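The scale-down step above boils down to a single kubectl invocation followed by re-listing pods by label until the name count matches the requested replicas; a minimal sketch with the namespace from the log (requires a live cluster):

```shell
# Ask the replication controller for one replica; --timeout bounds how long
# kubectl waits for the scale to be acknowledged by the server.
kubectl scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-841
# The framework then polls this listing until only one pod name remains.
kubectl get pods -l name=update-demo --namespace=kubectl-841 \
  -o template --template='{{range .items}}{{.metadata.name}} {{end}}'
```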
STEP: scaling up the replication controller
May  8 11:21:22.229: INFO: scanned /root for discovery docs: 
May  8 11:21:22.230: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-841'
May  8 11:21:23.376: INFO: stderr: ""
May  8 11:21:23.376: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
May  8 11:21:23.376: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-841'
May  8 11:21:23.478: INFO: stderr: ""
May  8 11:21:23.478: INFO: stdout: "update-demo-nautilus-h22bq update-demo-nautilus-shjf9 "
May  8 11:21:23.478: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-h22bq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-841'
May  8 11:21:23.556: INFO: stderr: ""
May  8 11:21:23.556: INFO: stdout: "true"
May  8 11:21:23.556: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-h22bq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-841'
May  8 11:21:23.660: INFO: stderr: ""
May  8 11:21:23.660: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May  8 11:21:23.660: INFO: validating pod update-demo-nautilus-h22bq
May  8 11:21:23.664: INFO: got data: {
  "image": "nautilus.jpg"
}

May  8 11:21:23.664: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May  8 11:21:23.664: INFO: update-demo-nautilus-h22bq is verified up and running
May  8 11:21:23.664: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-shjf9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-841'
May  8 11:21:23.791: INFO: stderr: ""
May  8 11:21:23.791: INFO: stdout: ""
May  8 11:21:23.791: INFO: update-demo-nautilus-shjf9 is created but not running
May  8 11:21:28.791: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-841'
May  8 11:21:28.906: INFO: stderr: ""
May  8 11:21:28.906: INFO: stdout: "update-demo-nautilus-h22bq update-demo-nautilus-shjf9 "
May  8 11:21:28.906: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-h22bq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-841'
May  8 11:21:29.015: INFO: stderr: ""
May  8 11:21:29.015: INFO: stdout: "true"
May  8 11:21:29.015: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-h22bq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-841'
May  8 11:21:29.106: INFO: stderr: ""
May  8 11:21:29.106: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May  8 11:21:29.106: INFO: validating pod update-demo-nautilus-h22bq
May  8 11:21:29.109: INFO: got data: {
  "image": "nautilus.jpg"
}

May  8 11:21:29.110: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May  8 11:21:29.110: INFO: update-demo-nautilus-h22bq is verified up and running
May  8 11:21:29.110: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-shjf9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-841'
May  8 11:21:29.193: INFO: stderr: ""
May  8 11:21:29.193: INFO: stdout: "true"
May  8 11:21:29.193: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-shjf9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-841'
May  8 11:21:29.293: INFO: stderr: ""
May  8 11:21:29.293: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May  8 11:21:29.293: INFO: validating pod update-demo-nautilus-shjf9
May  8 11:21:29.299: INFO: got data: {
  "image": "nautilus.jpg"
}

May  8 11:21:29.299: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May  8 11:21:29.299: INFO: update-demo-nautilus-shjf9 is verified up and running
STEP: using delete to clean up resources
May  8 11:21:29.299: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-841'
May  8 11:21:29.411: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May  8 11:21:29.411: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
May  8 11:21:29.412: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-841'
May  8 11:21:29.527: INFO: stderr: "No resources found in kubectl-841 namespace.\n"
May  8 11:21:29.527: INFO: stdout: ""
May  8 11:21:29.527: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-841 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
May  8 11:21:29.634: INFO: stderr: ""
May  8 11:21:29.634: INFO: stdout: "update-demo-nautilus-h22bq\nupdate-demo-nautilus-shjf9\n"
May  8 11:21:30.134: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-841'
May  8 11:21:30.239: INFO: stderr: "No resources found in kubectl-841 namespace.\n"
May  8 11:21:30.239: INFO: stdout: ""
May  8 11:21:30.239: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-841 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
May  8 11:21:30.466: INFO: stderr: ""
May  8 11:21:30.466: INFO: stdout: ""
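The cleanup above pairs a force delete with a poll that counts a pod as gone once it carries a deletionTimestamp. A hedged sketch of the same pattern — the RC is named directly here, whereas the suite pipes its original manifest to `-f -` (requires a live cluster):

```shell
# Force-delete skips graceful termination, so pods may briefly linger;
# the loop exits once every remaining pod is marked for deletion.
kubectl delete rc update-demo-nautilus --grace-period=0 --force --namespace=kubectl-841
while kubectl get pods -l name=update-demo --namespace=kubectl-841 \
    -o go-template='{{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' \
    | grep -q .; do
  sleep 1
done
```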
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:21:30.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-841" for this suite.

• [SLOW TEST:25.186 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:269
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","total":275,"completed":111,"skipped":1859,"failed":0}
SSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:21:30.475: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
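The three container names above appear to encode the restart policy under test: "rpa" for restartPolicy Always, "rpof" for OnFailure, "rpn" for Never. A minimal sketch of the OnFailure case, where a non-zero exit should increment RestartCount (pod and image names here are assumptions, not taken from the suite; requires a live cluster):

```shell
# A container exiting non-zero under restartPolicy: OnFailure is restarted
# by the kubelet, and its RestartCount climbs on each attempt.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: terminate-cmd-demo
spec:
  restartPolicy: OnFailure
  containers:
  - name: terminate-cmd
    image: busybox
    command: ["sh", "-c", "exit 1"]
EOF
# After a restart or two, this prints a count greater than zero.
kubectl get pod terminate-cmd-demo \
  -o jsonpath='{.status.containerStatuses[0].restartCount}'
```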
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:22:01.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-8902" for this suite.

• [SLOW TEST:30.993 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":275,"completed":112,"skipped":1871,"failed":0}
SSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:22:01.468: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
May  8 11:22:01.529: INFO: Waiting up to 5m0s for pod "downward-api-1159a4a0-126c-4204-98b7-b05fed818194" in namespace "downward-api-6137" to be "Succeeded or Failed"
May  8 11:22:01.532: INFO: Pod "downward-api-1159a4a0-126c-4204-98b7-b05fed818194": Phase="Pending", Reason="", readiness=false. Elapsed: 2.953605ms
May  8 11:22:03.536: INFO: Pod "downward-api-1159a4a0-126c-4204-98b7-b05fed818194": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007041065s
May  8 11:22:05.541: INFO: Pod "downward-api-1159a4a0-126c-4204-98b7-b05fed818194": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011746486s
STEP: Saw pod success
May  8 11:22:05.541: INFO: Pod "downward-api-1159a4a0-126c-4204-98b7-b05fed818194" satisfied condition "Succeeded or Failed"
May  8 11:22:05.544: INFO: Trying to get logs from node kali-worker2 pod downward-api-1159a4a0-126c-4204-98b7-b05fed818194 container dapi-container: 
STEP: delete the pod
May  8 11:22:05.570: INFO: Waiting for pod downward-api-1159a4a0-126c-4204-98b7-b05fed818194 to disappear
May  8 11:22:05.587: INFO: Pod downward-api-1159a4a0-126c-4204-98b7-b05fed818194 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:22:05.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6137" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":275,"completed":113,"skipped":1876,"failed":0}
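The Downward API test above injects the pod's own UID into its environment via a fieldRef; a minimal sketch of such a pod, with assumed pod/container names and the namespace from the log (requires a live cluster):

```shell
# metadata.uid is resolved by the kubelet at container start and exposed
# as an ordinary environment variable.
kubectl apply --namespace=downward-api-6137 -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo POD_UID=$POD_UID"]
    env:
    - name: POD_UID
      valueFrom:
        fieldRef:
          fieldPath: metadata.uid
EOF
```

The test then reads the container log, asserts the UID appears, and deletes the pod, which is the "Saw pod success" / "delete the pod" sequence in the log above.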
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:22:05.616: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May  8 11:22:05.673: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:22:06.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-1300" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":275,"completed":114,"skipped":1909,"failed":0}
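The status sub-resource verified above is served at its own API path, separate from the main resource, so status writes cannot touch .spec. Reading it directly can be sketched as follows — the CRD name is a placeholder, not from the log (requires a live cluster):

```shell
# GET the /status sub-resource of a CRD through the raw API path; updates
# and patches to .status go through this same path.
kubectl get --raw \
  /apis/apiextensions.k8s.io/v1/customresourcedefinitions/foos.example.com/status
```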
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:22:06.282: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating all guestbook components
May  8 11:22:06.363: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-slave
  labels:
    app: agnhost
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: slave
    tier: backend

May  8 11:22:06.363: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5273'
May  8 11:22:06.702: INFO: stderr: ""
May  8 11:22:06.702: INFO: stdout: "service/agnhost-slave created\n"
May  8 11:22:06.702: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: master
    tier: backend

May  8 11:22:06.702: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5273'
May  8 11:22:06.997: INFO: stderr: ""
May  8 11:22:06.997: INFO: stdout: "service/agnhost-master created\n"
May  8 11:22:06.998: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

May  8 11:22:06.998: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5273'
May  8 11:22:07.516: INFO: stderr: ""
May  8 11:22:07.516: INFO: stdout: "service/frontend created\n"
May  8 11:22:07.516: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80

May  8 11:22:07.516: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5273'
May  8 11:22:07.752: INFO: stderr: ""
May  8 11:22:07.752: INFO: stdout: "deployment.apps/frontend created\n"
May  8 11:22:07.753: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

May  8 11:22:07.753: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5273'
May  8 11:22:08.137: INFO: stderr: ""
May  8 11:22:08.137: INFO: stdout: "deployment.apps/agnhost-master created\n"
May  8 11:22:08.137: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

May  8 11:22:08.137: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5273'
May  8 11:22:08.458: INFO: stderr: ""
May  8 11:22:08.458: INFO: stdout: "deployment.apps/agnhost-slave created\n"
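The six manifests above create three services (agnhost-slave, agnhost-master, frontend) and three matching deployments, all backed by the agnhost test image. Before validating content, deployment readiness can be confirmed like this — a sketch, whereas the suite itself polls pod phases (requires a live cluster):

```shell
# Block until each deployment's pods are rolled out, or fail after 2 minutes.
for d in frontend agnhost-master agnhost-slave; do
  kubectl rollout status deployment/"$d" --namespace=kubectl-5273 --timeout=2m
done
```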
STEP: validating guestbook app
May  8 11:22:08.458: INFO: Waiting for all frontend pods to be Running.
May  8 11:22:18.509: INFO: Waiting for frontend to serve content.
May  8 11:22:18.520: INFO: Trying to add a new entry to the guestbook.
May  8 11:22:18.531: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
May  8 11:22:18.539: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5273'
May  8 11:22:18.728: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May  8 11:22:18.728: INFO: stdout: "service \"agnhost-slave\" force deleted\n"
STEP: using delete to clean up resources
May  8 11:22:18.729: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5273'
May  8 11:22:18.917: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May  8 11:22:18.917: INFO: stdout: "service \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
May  8 11:22:18.918: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5273'
May  8 11:22:19.054: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May  8 11:22:19.054: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
May  8 11:22:19.054: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5273'
May  8 11:22:19.149: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May  8 11:22:19.149: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
May  8 11:22:19.149: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5273'
May  8 11:22:19.276: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May  8 11:22:19.276: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
May  8 11:22:19.276: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5273'
May  8 11:22:19.753: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May  8 11:22:19.753: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:22:19.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5273" for this suite.

• [SLOW TEST:13.511 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:310
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":275,"completed":115,"skipped":1929,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:22:19.794: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May  8 11:22:20.275: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
May  8 11:22:23.261: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7178 create -f -'
May  8 11:22:27.075: INFO: stderr: ""
May  8 11:22:27.075: INFO: stdout: "e2e-test-crd-publish-openapi-9673-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
May  8 11:22:27.075: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7178 delete e2e-test-crd-publish-openapi-9673-crds test-cr'
May  8 11:22:27.195: INFO: stderr: ""
May  8 11:22:27.195: INFO: stdout: "e2e-test-crd-publish-openapi-9673-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
May  8 11:22:27.195: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7178 apply -f -'
May  8 11:22:27.560: INFO: stderr: ""
May  8 11:22:27.560: INFO: stdout: "e2e-test-crd-publish-openapi-9673-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
May  8 11:22:27.560: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7178 delete e2e-test-crd-publish-openapi-9673-crds test-cr'
May  8 11:22:27.696: INFO: stderr: ""
May  8 11:22:27.696: INFO: stdout: "e2e-test-crd-publish-openapi-9673-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
May  8 11:22:27.696: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9673-crds'
May  8 11:22:28.778: INFO: stderr: ""
May  8 11:22:28.778: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-9673-crd\nVERSION:  crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n     preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n   apiVersion\t\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t\n     Specification of Waldo\n\n   status\t\n     Status of Waldo\n\n"
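Because this CRD preserves unknown fields inside a nested object, the published OpenAPI schema still documents the top-level fields, and kubectl explain can drill into them. A sketch using the resource name from the log; the `.spec` field path is an assumption about what is worth inspecting (requires a live cluster):

```shell
# Top-level explain (as run above), then one level deeper into spec.
kubectl explain e2e-test-crd-publish-openapi-9673-crds
kubectl explain e2e-test-crd-publish-openapi-9673-crds.spec
```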
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:22:30.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-7178" for this suite.

• [SLOW TEST:10.922 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":275,"completed":116,"skipped":1936,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
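The CRD exercised above publishes a schema that preserves unknown fields inside a nested object, which is why `kubectl explain` reports `spec` and `status` with only a description. A minimal manifest producing the same behavior might look like this (group and names are hypothetical stand-ins for the test's generated ones):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: testcrs.example.com          # hypothetical name
spec:
  group: example.com                 # hypothetical group
  scope: Namespaced
  names:
    plural: testcrs
    singular: testcr
    kind: TestCr
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            description: Specification of Waldo
            type: object
            x-kubernetes-preserve-unknown-fields: true   # unknown nested fields survive pruning
          status:
            description: Status of Waldo
            type: object
            x-kubernetes-preserve-unknown-fields: true
```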
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:22:30.716: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May  8 11:22:31.623: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May  8 11:22:33.695: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724533751, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724533751, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724533751, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724533751, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May  8 11:22:36.745: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a validating webhook configuration
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Updating a validating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Patching a validating webhook configuration's rules to include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:22:36.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4581" for this suite.
STEP: Destroying namespace "webhook-4581-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:6.311 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":275,"completed":117,"skipped":1957,"failed":0}
SSSSSS
------------------------------
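The webhook test above toggles the CREATE operation in a ValidatingWebhookConfiguration's rules: with CREATE removed, the non-compliant configMap is admitted; once CREATE is patched back in, it is rejected again. A sketch of the relevant rule stanza, assuming a hypothetical webhook name and path (the service name and namespace are taken from the run above):

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-configmaps.example.com    # hypothetical
webhooks:
- name: deny-configmaps.example.com    # hypothetical
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Fail
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]             # dropping CREATE here stops the webhook from vetting new configMaps
    resources: ["configmaps"]
  clientConfig:
    service:
      namespace: webhook-4581
      name: e2e-test-webhook
      path: /validate                  # hypothetical path
```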
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:22:37.027: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May  8 11:22:37.198: INFO: (0) /api/v1/nodes/kali-worker:10250/proxy/logs/: 
alternatives.log
containers/

>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should be able to change the type from NodePort to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a service nodeport-service with the type=NodePort in namespace services-8520
STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service
STEP: creating service externalsvc in namespace services-8520
STEP: creating replication controller externalsvc in namespace services-8520
I0508 11:22:38.178219       7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-8520, replica count: 2
I0508 11:22:41.228663       7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0508 11:22:44.228889       7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: changing the NodePort service to type=ExternalName
May  8 11:22:44.592: INFO: Creating new exec pod
May  8 11:22:48.725: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-8520 execpod9b8g2 -- /bin/sh -x -c nslookup nodeport-service'
May  8 11:22:48.967: INFO: stderr: "I0508 11:22:48.863610    2775 log.go:172] (0xc0009c0b00) (0xc0006af680) Create stream\nI0508 11:22:48.863667    2775 log.go:172] (0xc0009c0b00) (0xc0006af680) Stream added, broadcasting: 1\nI0508 11:22:48.866112    2775 log.go:172] (0xc0009c0b00) Reply frame received for 1\nI0508 11:22:48.866153    2775 log.go:172] (0xc0009c0b00) (0xc0006af720) Create stream\nI0508 11:22:48.866165    2775 log.go:172] (0xc0009c0b00) (0xc0006af720) Stream added, broadcasting: 3\nI0508 11:22:48.867223    2775 log.go:172] (0xc0009c0b00) Reply frame received for 3\nI0508 11:22:48.867256    2775 log.go:172] (0xc0009c0b00) (0xc0006af7c0) Create stream\nI0508 11:22:48.867266    2775 log.go:172] (0xc0009c0b00) (0xc0006af7c0) Stream added, broadcasting: 5\nI0508 11:22:48.868267    2775 log.go:172] (0xc0009c0b00) Reply frame received for 5\nI0508 11:22:48.954312    2775 log.go:172] (0xc0009c0b00) Data frame received for 5\nI0508 11:22:48.954332    2775 log.go:172] (0xc0006af7c0) (5) Data frame handling\nI0508 11:22:48.954343    2775 log.go:172] (0xc0006af7c0) (5) Data frame sent\n+ nslookup nodeport-service\nI0508 11:22:48.960094    2775 log.go:172] (0xc0009c0b00) Data frame received for 3\nI0508 11:22:48.960123    2775 log.go:172] (0xc0006af720) (3) Data frame handling\nI0508 11:22:48.960142    2775 log.go:172] (0xc0006af720) (3) Data frame sent\nI0508 11:22:48.960963    2775 log.go:172] (0xc0009c0b00) Data frame received for 3\nI0508 11:22:48.960990    2775 log.go:172] (0xc0006af720) (3) Data frame handling\nI0508 11:22:48.961013    2775 log.go:172] (0xc0006af720) (3) Data frame sent\nI0508 11:22:48.961560    2775 log.go:172] (0xc0009c0b00) Data frame received for 3\nI0508 11:22:48.961588    2775 log.go:172] (0xc0006af720) (3) Data frame handling\nI0508 11:22:48.961611    2775 log.go:172] (0xc0009c0b00) Data frame received for 5\nI0508 11:22:48.961625    2775 log.go:172] (0xc0006af7c0) (5) Data frame handling\nI0508 11:22:48.962999    2775 log.go:172] (0xc0009c0b00) Data frame received for 1\nI0508 11:22:48.963019    2775 log.go:172] (0xc0006af680) (1) Data frame handling\nI0508 11:22:48.963033    2775 log.go:172] (0xc0006af680) (1) Data frame sent\nI0508 11:22:48.963051    2775 log.go:172] (0xc0009c0b00) (0xc0006af680) Stream removed, broadcasting: 1\nI0508 11:22:48.963081    2775 log.go:172] (0xc0009c0b00) Go away received\nI0508 11:22:48.963360    2775 log.go:172] (0xc0009c0b00) (0xc0006af680) Stream removed, broadcasting: 1\nI0508 11:22:48.963382    2775 log.go:172] (0xc0009c0b00) (0xc0006af720) Stream removed, broadcasting: 3\nI0508 11:22:48.963397    2775 log.go:172] (0xc0009c0b00) (0xc0006af7c0) Stream removed, broadcasting: 5\n"
May  8 11:22:48.967: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-8520.svc.cluster.local\tcanonical name = externalsvc.services-8520.svc.cluster.local.\nName:\texternalsvc.services-8520.svc.cluster.local\nAddress: 10.110.128.93\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-8520, will wait for the garbage collector to delete the pods
May  8 11:22:49.037: INFO: Deleting ReplicationController externalsvc took: 6.492409ms
May  8 11:22:49.338: INFO: Terminating ReplicationController externalsvc pods took: 300.252506ms
May  8 11:22:54.777: INFO: Cleaning up the NodePort to ExternalName test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:22:54.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8520" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:17.214 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from NodePort to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":275,"completed":119,"skipped":1969,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
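The nslookup output above confirms the conversion: `nodeport-service` resolves as a CNAME to `externalsvc.services-8520.svc.cluster.local`. The e2e framework performs the type change through the API, but the resulting service would be equivalent to this manifest (a sketch using the names from the run above):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nodeport-service
  namespace: services-8520
spec:
  type: ExternalName          # changed from NodePort; nodePort/clusterIP fields are dropped
  externalName: externalsvc.services-8520.svc.cluster.local
```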
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:22:54.888: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
May  8 11:23:03.378: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
May  8 11:23:03.394: INFO: Pod pod-with-prestop-http-hook still exists
May  8 11:23:05.394: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
May  8 11:23:05.407: INFO: Pod pod-with-prestop-http-hook still exists
May  8 11:23:07.394: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
May  8 11:23:07.398: INFO: Pod pod-with-prestop-http-hook still exists
May  8 11:23:09.394: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
May  8 11:23:09.398: INFO: Pod pod-with-prestop-http-hook still exists
May  8 11:23:11.394: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
May  8 11:23:11.399: INFO: Pod pod-with-prestop-http-hook still exists
May  8 11:23:13.394: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
May  8 11:23:13.414: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:23:13.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-5435" for this suite.

• [SLOW TEST:18.621 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":275,"completed":120,"skipped":1989,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
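The lifecycle-hook test above deletes a pod whose container registers an HTTP preStop hook; the kubelet fires the GET against the handler pod before the container is stopped, which is what the "check prestop hook" step verifies. A minimal sketch of such a pod (image, path, and port are hypothetical; the real test targets the handler pod's IP):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook   # name from the run above
spec:
  containers:
  - name: pod-with-prestop-http-hook
    image: k8s.gcr.io/pause:3.2      # hypothetical image
    lifecycle:
      preStop:
        httpGet:
          path: /echo?msg=prestop-hook   # hypothetical path recorded by the handler
          port: 8080                     # hypothetical port
```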
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:23:13.510: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May  8 11:23:13.606: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
May  8 11:23:16.768: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:23:17.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-5063" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":275,"completed":121,"skipped":2026,"failed":0}
SSS
------------------------------
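The ReplicationController test above pins a quota of two pods, then asks the RC for more, which surfaces a ReplicaFailure condition until the RC is scaled back within quota. A sketch of the two objects involved (replica count and image are hypothetical; the quota limit of two pods is from the run above):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: condition-test
spec:
  hard:
    pods: "2"                 # only two pods may run in the namespace
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: condition-test
spec:
  replicas: 3                 # hypothetical count; anything above the quota triggers ReplicaFailure
  selector:
    name: condition-test
  template:
    metadata:
      labels:
        name: condition-test
    spec:
      containers:
      - name: httpd
        image: httpd:2.4      # hypothetical image
```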
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:23:17.861: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating the pod
May  8 11:23:23.491: INFO: Successfully updated pod "labelsupdatea0de702f-deb5-4045-88a9-910b17cd7127"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:23:25.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7035" for this suite.

• [SLOW TEST:7.707 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":275,"completed":122,"skipped":2029,"failed":0}
SSS
------------------------------
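The downward API test above edits a running pod's labels and waits for the change to appear in the projected file. A sketch of a pod that projects its own labels into a volume, under hypothetical names and image:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: labelsupdate-demo     # hypothetical name
  labels:
    key1: value1              # updating this label is reflected in /etc/podinfo/labels
spec:
  containers:
  - name: client-container
    image: busybox:1.29       # hypothetical image
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: labels
        fieldRef:
          fieldPath: metadata.labels
```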
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:23:25.569: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
May  8 11:23:30.134: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:23:30.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-319" for this suite.

• [SLOW TEST:5.162 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:133
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":123,"skipped":2032,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
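In the terminated-container test above, the pod succeeds after writing "OK" to its termination-message file, and `FallbackToLogsOnError` only falls back to logs when the file is empty on error, so the file contents win. A sketch of such a container spec (image is hypothetical; the "OK" message matches the run above):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: termination-message-demo     # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: term
    image: busybox:1.29              # hypothetical image
    command: ["sh", "-c", "printf OK > /dev/termination-log"]
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: FallbackToLogsOnError
```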
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:23:30.731: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May  8 11:23:30.911: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:23:32.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-3385" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","total":275,"completed":124,"skipped":2055,"failed":0}
SSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:23:32.311: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap configmap-7604/configmap-test-521ce115-f9b6-41bd-be84-1d7997505f01
STEP: Creating a pod to test consume configMaps
May  8 11:23:32.517: INFO: Waiting up to 5m0s for pod "pod-configmaps-cf7d51ec-8af6-420c-a7f2-ecf563639686" in namespace "configmap-7604" to be "Succeeded or Failed"
May  8 11:23:32.537: INFO: Pod "pod-configmaps-cf7d51ec-8af6-420c-a7f2-ecf563639686": Phase="Pending", Reason="", readiness=false. Elapsed: 20.209397ms
May  8 11:23:34.683: INFO: Pod "pod-configmaps-cf7d51ec-8af6-420c-a7f2-ecf563639686": Phase="Pending", Reason="", readiness=false. Elapsed: 2.165709452s
May  8 11:23:36.695: INFO: Pod "pod-configmaps-cf7d51ec-8af6-420c-a7f2-ecf563639686": Phase="Pending", Reason="", readiness=false. Elapsed: 4.178040108s
May  8 11:23:38.808: INFO: Pod "pod-configmaps-cf7d51ec-8af6-420c-a7f2-ecf563639686": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.291317787s
STEP: Saw pod success
May  8 11:23:38.809: INFO: Pod "pod-configmaps-cf7d51ec-8af6-420c-a7f2-ecf563639686" satisfied condition "Succeeded or Failed"
May  8 11:23:38.812: INFO: Trying to get logs from node kali-worker pod pod-configmaps-cf7d51ec-8af6-420c-a7f2-ecf563639686 container env-test: 
STEP: delete the pod
May  8 11:23:38.841: INFO: Waiting for pod pod-configmaps-cf7d51ec-8af6-420c-a7f2-ecf563639686 to disappear
May  8 11:23:38.880: INFO: Pod pod-configmaps-cf7d51ec-8af6-420c-a7f2-ecf563639686 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:23:38.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7604" for this suite.

• [SLOW TEST:6.578 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:34
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":275,"completed":125,"skipped":2058,"failed":0}
SSS
------------------------------
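The ConfigMap test above runs a pod whose `env-test` container imports a configMap key as an environment variable and exits successfully once the value is visible. A sketch, with hypothetical key and variable names (the container name matches the run above):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmap-env            # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox:1.29              # hypothetical image
    command: ["sh", "-c", "env"]
    env:
    - name: CONFIG_DATA_1            # hypothetical variable name
      valueFrom:
        configMapKeyRef:
          name: configmap-test       # hypothetical configMap name
          key: data-1                # hypothetical key
```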
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:23:38.889: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May  8 11:23:38.944: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:23:43.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8142" for this suite.
•{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":275,"completed":126,"skipped":2061,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:23:43.131: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name s-test-opt-del-1a709437-4af7-4792-bfb8-35357bf32d16
STEP: Creating secret with name s-test-opt-upd-6b91ca2f-90a6-4cd1-82b5-76c541af87a2
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-1a709437-4af7-4792-bfb8-35357bf32d16
STEP: Updating secret s-test-opt-upd-6b91ca2f-90a6-4cd1-82b5-76c541af87a2
STEP: Creating secret with name s-test-opt-create-d008ee95-0e6f-4028-a459-300d48ea8dcd
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:23:51.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1887" for this suite.

• [SLOW TEST:8.226 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":127,"skipped":2076,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
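The projected-secret test above marks each secret source `optional: true`, so deleting one secret, updating another, and creating a third are all reflected in the volume without failing the pod. A sketch of the projected volume (image and shortened secret names are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo        # hypothetical name
spec:
  containers:
  - name: secret-volume-test
    image: busybox:1.29              # hypothetical image
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: projected-secrets
      mountPath: /etc/projected
  volumes:
  - name: projected-secrets
    projected:
      sources:
      - secret:
          name: s-test-opt-del       # hypothetical short name; may be absent
          optional: true
      - secret:
          name: s-test-opt-upd       # hypothetical short name
          optional: true
```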
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:23:51.358: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May  8 11:23:51.435: INFO: Creating deployment "webserver-deployment"
May  8 11:23:51.462: INFO: Waiting for observed generation 1
May  8 11:23:53.556: INFO: Waiting for all required pods to come up
May  8 11:23:53.560: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
May  8 11:24:05.570: INFO: Waiting for deployment "webserver-deployment" to complete
May  8 11:24:05.575: INFO: Updating deployment "webserver-deployment" with a non-existent image
May  8 11:24:05.580: INFO: Updating deployment webserver-deployment
May  8 11:24:05.580: INFO: Waiting for observed generation 2
May  8 11:24:07.604: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
May  8 11:24:07.608: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
May  8 11:24:07.611: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
May  8 11:24:07.617: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
May  8 11:24:07.617: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
May  8 11:24:07.620: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
May  8 11:24:07.624: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas
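The 8/5 split reported above follows from the Deployment's rolling-update parameters (the object dump later in this log shows `MaxUnavailable:2, MaxSurge:3`): with 10 desired replicas, at least `10 - 2 = 8` old pods must stay available, and the total may not exceed `10 + 3 = 13`, leaving 5 for the new ReplicaSet. A minimal sketch of that bound arithmetic (simplified; the real controller converges toward these limits iteratively rather than computing them in one step):

```python
def rolling_update_targets(desired, max_unavailable, max_surge):
    """Steady-state replica bounds during a rolling update.

    Simplified sketch: the old ReplicaSet is held at the minimum
    availability floor, and the new ReplicaSet takes whatever room
    the surge ceiling leaves.
    """
    old_rs = desired - max_unavailable       # 10 - 2 = 8 must stay available
    new_rs = desired + max_surge - old_rs    # 13 total cap - 8 = 5
    return old_rs, new_rs

# Matches the log: first rollout's RS at 8, second rollout's RS at 5.
print(rolling_update_targets(10, 2, 3))  # (8, 5)
```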
May  8 11:24:07.624: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30
May  8 11:24:07.630: INFO: Updating deployment webserver-deployment
May  8 11:24:07.630: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas
May  8 11:24:08.389: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
May  8 11:24:08.393: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
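The 20/13 split is proportional scaling at work: when the Deployment is scaled 10 → 30 mid-rollout, the extra replicas are distributed across both ReplicaSets in proportion to their current sizes (8 and 5, summing to the previous `max-replicas` annotation of 13), against the new ceiling of `30 + maxSurge(3) = 33`. A sketch of that arithmetic, assuming round-half-away-from-zero as in the controller's integer rounding (simplified; the real controller also caps each ReplicaSet by the replicas remaining to add):

```python
def proportional_scale(rs_replicas, old_max, new_desired, max_surge):
    """Distribute a scale-up across ReplicaSets proportionally.

    Each ReplicaSet's new size is its current size scaled by the
    ratio of the new total ceiling to the old one, rounded half
    away from zero.
    """
    allowed = new_desired + max_surge            # 30 + 3 = 33
    return [int(r * allowed / old_max + 0.5)     # round half away from zero
            for r in rs_replicas]

# First rollout's RS: 8 * 33/13 ≈ 20.3 → 20
# Second rollout's RS: 5 * 33/13 ≈ 12.7 → 13
print(proportional_scale([8, 5], 13, 30, 3))  # [20, 13]
```

The two targets sum to 33, the surge ceiling, which is exactly what the verification lines above check (`.spec.replicas = 20` and `13`).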
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
May  8 11:24:08.637: INFO: Deployment "webserver-deployment":
&Deployment{ObjectMeta:{webserver-deployment  deployment-1207 /apis/apps/v1/namespaces/deployment-1207/deployments/webserver-deployment 9935c9b7-bef0-4e01-af32-f3c5d040b1c3 2570832 3 2020-05-08 11:23:51 +0000 UTC   map[name:httpd] map[deployment.kubernetes.io/revision:2] [] []  [{e2e.test Update apps/v1 2020-05-08 11:24:07 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 112 114 111 103 114 101 115 115 68 101 97 100 108 105 110 101 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 118 105 115 105 111 110 72 105 115 116 111 114 121 76 105 109 105 116 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 116 114 97 116 101 103 121 34 58 123 34 102 58 114 111 108 108 105 110 103 85 112 100 97 116 101 34 58 123 34 46 34 58 123 125 44 34 102 58 109 97 120 83 117 114 103 101 34 58 123 125 44 34 102 58 109 97 120 85 110 97 118 97 105 108 97 98 108 101 34 58 123 125 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 
125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-05-08 11:24:08 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 65 118 97 105 108 97 98 108 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 80 114 111 103 114 101 115 115 105 110 103 92 34 125 34 58 
123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 110 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 112 100 97 116 101 100 82 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0046d0298  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] 
[]}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:5,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-6676bcd6d4" is progressing.,LastUpdateTime:2020-05-08 11:24:06 +0000 UTC,LastTransitionTime:2020-05-08 11:23:51 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-05-08 11:24:08 +0000 UTC,LastTransitionTime:2020-05-08 11:24:08 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},}

May  8 11:24:08.850: INFO: New ReplicaSet "webserver-deployment-6676bcd6d4" of Deployment "webserver-deployment":
&ReplicaSet{ObjectMeta:{webserver-deployment-6676bcd6d4  deployment-1207 /apis/apps/v1/namespaces/deployment-1207/replicasets/webserver-deployment-6676bcd6d4 92d160d3-5997-425d-ab25-1241e5d4c434 2570879 3 2020-05-08 11:24:05 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 9935c9b7-bef0-4e01-af32-f3c5d040b1c3 0xc0046d0727 0xc0046d0728}] []  [{kube-controller-manager Update apps/v1 2020-05-08 11:24:08 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 57 57 51 53 99 57 98 55 45 98 101 102 48 45 52 101 48 49 45 97 102 51 50 45 102 51 99 53 100 48 52 48 98 49 99 51 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 
102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 
125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 6676bcd6d4,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0046d07a8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
May  8 11:24:08.850: INFO: All old ReplicaSets of Deployment "webserver-deployment":
May  8 11:24:08.850: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-84855cf797  deployment-1207 /apis/apps/v1/namespaces/deployment-1207/replicasets/webserver-deployment-84855cf797 74d48b07-a3b3-487e-9112-3b80a86c229b 2570868 3 2020-05-08 11:23:51 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 9935c9b7-bef0-4e01-af32-f3c5d040b1c3 0xc0046d0807 0xc0046d0808}] []  [{kube-controller-manager Update apps/v1 2020-05-08 11:24:08 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 57 57 51 53 99 57 98 55 45 98 101 102 48 45 52 101 48 49 45 97 102 51 50 45 102 51 99 53 100 48 52 48 98 49 99 51 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 
101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 
110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 84855cf797,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0046d0878  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},}
May  8 11:24:09.219: INFO: Pod "webserver-deployment-6676bcd6d4-229cx" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-229cx webserver-deployment-6676bcd6d4- deployment-1207 /api/v1/namespaces/deployment-1207/pods/webserver-deployment-6676bcd6d4-229cx 7a8a5e2d-bd23-47b0-98d2-fcd161533842 2570886 0 2020-05-08 11:24:08 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 92d160d3-5997-425d-ab25-1241e5d4c434 0xc0046d0d97 0xc0046d0d98}] []  [{kube-controller-manager Update v1 2020-05-08 11:24:08 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 57 50 100 49 54 48 100 51 45 53 57 57 55 45 52 50 53 100 45 97 98 50 53 45 49 50 52 49 101 53 100 52 99 52 51 52 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 
111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-08 11:24:08 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 
58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-l8sgt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-l8sgt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-l8sgt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFr
omSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 11:24:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 11:24:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 11:24:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 11:24:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-05-08 11:24:08 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  8 11:24:09.219: INFO: Pod "webserver-deployment-6676bcd6d4-2mnmb" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-2mnmb webserver-deployment-6676bcd6d4- deployment-1207 /api/v1/namespaces/deployment-1207/pods/webserver-deployment-6676bcd6d4-2mnmb 200d8fe6-4af6-40f8-a17a-3363398d1a57 2570848 0 2020-05-08 11:24:08 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 92d160d3-5997-425d-ab25-1241e5d4c434 0xc0046d0f47 0xc0046d0f48}] []  [{kube-controller-manager Update v1 2020-05-08 11:24:08 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 57 50 100 49 54 48 100 51 45 53 57 57 55 45 52 50 53 100 45 97 98 50 53 45 49 50 52 49 101 53 100 52 99 52 51 52 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 
111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-l8sgt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-l8sgt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-l8sgt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOpt
ions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 11:24:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  8 11:24:09.219: INFO: Pod "webserver-deployment-6676bcd6d4-2n9jj" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-2n9jj webserver-deployment-6676bcd6d4- deployment-1207 /api/v1/namespaces/deployment-1207/pods/webserver-deployment-6676bcd6d4-2n9jj 4cc606d6-7678-432a-897b-b79c5a3cc936 2570788 0 2020-05-08 11:24:05 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 92d160d3-5997-425d-ab25-1241e5d4c434 0xc0046d1087 0xc0046d1088}] []  [{kube-controller-manager Update v1 2020-05-08 11:24:05 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"92d160d3-5997-425d-ab25-1241e5d4c434\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}} {kubelet Update v1 2020-05-08 11:24:05 +0000 UTC FieldsV1 &FieldsV1{Raw:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-l8sgt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-l8sgt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-l8sgt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFr
omSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 11:24:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 11:24:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 11:24:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 11:24:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:,StartTime:2020-05-08 11:24:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
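The `FieldsV1{Raw:*[…]}` payloads in dumps like the one above are the pod's managedFields JSON, rendered by the logger as space-separated decimal ASCII codes. A minimal sketch of turning such a run back into readable JSON (the helper name `decode_fieldsv1` is my own, not part of the e2e framework):

```python
import json

def decode_fieldsv1(raw_dump: str) -> str:
    """Convert a space-separated decimal byte dump (e.g. '123 34 ...')
    back into the UTF-8 JSON text it encodes."""
    return bytes(int(tok) for tok in raw_dump.split()).decode("utf-8")

# Short hand-picked sample in the same encoding as the dumps above.
sample = "123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 125 125"
decoded = decode_fieldsv1(sample)
print(decoded)  # {"f:metadata":{}}
assert json.loads(decoded) == {"f:metadata": {}}
```

Piping a full `Raw:*[…]` run through this decoder yields the `{"f:metadata":…,"f:spec":…}` (kube-controller-manager) and `{"f:status":…}` (kubelet) field ownership maps shown decoded in the dumps.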
May  8 11:24:09.219: INFO: Pod "webserver-deployment-6676bcd6d4-5scm7" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-5scm7 webserver-deployment-6676bcd6d4- deployment-1207 /api/v1/namespaces/deployment-1207/pods/webserver-deployment-6676bcd6d4-5scm7 a5f44592-1fac-46a5-8e23-ca84aa4bd9c7 2570861 0 2020-05-08 11:24:08 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 92d160d3-5997-425d-ab25-1241e5d4c434 0xc0046d1237 0xc0046d1238}] []  [{kube-controller-manager Update v1 2020-05-08 11:24:08 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"92d160d3-5997-425d-ab25-1241e5d4c434\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-l8sgt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-l8sgt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-l8sgt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOpt
ions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 11:24:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  8 11:24:09.219: INFO: Pod "webserver-deployment-6676bcd6d4-68z5m" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-68z5m webserver-deployment-6676bcd6d4- deployment-1207 /api/v1/namespaces/deployment-1207/pods/webserver-deployment-6676bcd6d4-68z5m 9b981113-65cc-4e9a-8766-7f4fff0ad0f3 2570813 0 2020-05-08 11:24:06 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 92d160d3-5997-425d-ab25-1241e5d4c434 0xc0046d1377 0xc0046d1378}] []  [{kube-controller-manager Update v1 2020-05-08 11:24:06 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"92d160d3-5997-425d-ab25-1241e5d4c434\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}} {kubelet Update v1 2020-05-08 11:24:06 +0000 UTC FieldsV1 &FieldsV1{Raw:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-l8sgt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-l8sgt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-l8sgt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFr
omSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 11:24:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 11:24:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 11:24:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 11:24:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-05-08 11:24:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  8 11:24:09.220: INFO: Pod "webserver-deployment-6676bcd6d4-cn9sb" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-cn9sb webserver-deployment-6676bcd6d4- deployment-1207 /api/v1/namespaces/deployment-1207/pods/webserver-deployment-6676bcd6d4-cn9sb 3323344c-d0e0-43e0-a2e8-fdfb43a50f47 2570799 0 2020-05-08 11:24:05 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 92d160d3-5997-425d-ab25-1241e5d4c434 0xc0046d1527 0xc0046d1528}] []  [{kube-controller-manager Update v1 2020-05-08 11:24:05 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"92d160d3-5997-425d-ab25-1241e5d4c434\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}} {kubelet Update v1 2020-05-08 11:24:06 +0000 UTC FieldsV1 &FieldsV1{Raw:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-l8sgt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-l8sgt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-l8sgt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFr
omSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 11:24:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 11:24:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 11:24:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 11:24:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:,StartTime:2020-05-08 11:24:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
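The "is not available" verdicts above follow from the pods' conditions: each pod's Ready condition is False with reason ContainersNotReady, because the image tag `webserver:404` never produces a running container. A minimal sketch of the readiness part of that check (`is_pod_ready` is a hypothetical helper, not the framework's code; real deployment availability additionally honors `minReadySeconds`):

```python
def is_pod_ready(conditions):
    """True only when the pod reports a Ready condition with status True."""
    return any(c.get("type") == "Ready" and c.get("status") == "True"
               for c in conditions)

# Conditions as dumped for the webserver-deployment pods above.
conditions = [
    {"type": "Initialized", "status": "True"},
    {"type": "Ready", "status": "False", "reason": "ContainersNotReady"},
    {"type": "ContainersReady", "status": "False", "reason": "ContainersNotReady"},
    {"type": "PodScheduled", "status": "True"},
]
print(is_pod_ready(conditions))  # False
```

With `webserver:404` unresolvable, the containers stay in `Waiting{Reason:ContainerCreating}` (or an image-pull error), so Ready never flips to True and the pods stay unavailable.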
May  8 11:24:09.220: INFO: Pod "webserver-deployment-6676bcd6d4-cnhh6" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-cnhh6 webserver-deployment-6676bcd6d4- deployment-1207 /api/v1/namespaces/deployment-1207/pods/webserver-deployment-6676bcd6d4-cnhh6 aa3eb8be-c382-4bfb-a45e-4e21e0eae7f4 2570871 0 2020-05-08 11:24:08 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 92d160d3-5997-425d-ab25-1241e5d4c434 0xc0046d16f7 0xc0046d16f8}] []  [{kube-controller-manager Update v1 2020-05-08 11:24:08 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 57 50 100 49 54 48 100 51 45 53 57 57 55 45 52 50 53 100 45 97 98 50 53 45 49 50 52 49 101 53 100 52 99 52 51 52 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 
111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-l8sgt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-l8sgt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-l8sgt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOpt
ions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 11:24:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  8 11:24:09.220: INFO: Pod "webserver-deployment-6676bcd6d4-hx4hd" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-hx4hd webserver-deployment-6676bcd6d4- deployment-1207 /api/v1/namespaces/deployment-1207/pods/webserver-deployment-6676bcd6d4-hx4hd 3f877a6b-c75f-428d-8b73-bd875e31b5fb 2570866 0 2020-05-08 11:24:08 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 92d160d3-5997-425d-ab25-1241e5d4c434 0xc0046d1847 0xc0046d1848}] []  [{kube-controller-manager Update v1 2020-05-08 11:24:08 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 57 50 100 49 54 48 100 51 45 53 57 57 55 45 52 50 53 100 45 97 98 50 53 45 49 50 52 49 101 53 100 52 99 52 51 52 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 
111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-l8sgt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-l8sgt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-l8sgt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOpt
ions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 11:24:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  8 11:24:09.220: INFO: Pod "webserver-deployment-6676bcd6d4-mxt4k" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-mxt4k webserver-deployment-6676bcd6d4- deployment-1207 /api/v1/namespaces/deployment-1207/pods/webserver-deployment-6676bcd6d4-mxt4k 55d85c5e-c731-4227-925f-d2bb9dc35958 2570817 0 2020-05-08 11:24:06 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 92d160d3-5997-425d-ab25-1241e5d4c434 0xc0046d1987 0xc0046d1988}] []  [{kube-controller-manager Update v1 2020-05-08 11:24:06 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 57 50 100 49 54 48 100 51 45 53 57 57 55 45 52 50 53 100 45 97 98 50 53 45 49 50 52 49 101 53 100 52 99 52 51 52 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 
111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-08 11:24:06 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 
58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-l8sgt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-l8sgt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-l8sgt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFr
omSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 11:24:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 11:24:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 11:24:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 11:24:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-05-08 11:24:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  8 11:24:09.220: INFO: Pod "webserver-deployment-6676bcd6d4-mxtb2" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-mxtb2 webserver-deployment-6676bcd6d4- deployment-1207 /api/v1/namespaces/deployment-1207/pods/webserver-deployment-6676bcd6d4-mxtb2 54b855a9-a56b-42a0-a64c-e24080d93ed0 2570812 0 2020-05-08 11:24:05 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 92d160d3-5997-425d-ab25-1241e5d4c434 0xc0046d1b47 0xc0046d1b48}] []  [{kube-controller-manager Update v1 2020-05-08 11:24:05 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 57 50 100 49 54 48 100 51 45 53 57 57 55 45 52 50 53 100 45 97 98 50 53 45 49 50 52 49 101 53 100 52 99 52 51 52 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 
111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-08 11:24:06 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 
58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-l8sgt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-l8sgt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-l8sgt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFr
omSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 11:24:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 11:24:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 11:24:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 11:24:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:,StartTime:2020-05-08 11:24:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  8 11:24:09.221: INFO: Pod "webserver-deployment-6676bcd6d4-qh8tw" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-qh8tw webserver-deployment-6676bcd6d4- deployment-1207 /api/v1/namespaces/deployment-1207/pods/webserver-deployment-6676bcd6d4-qh8tw 2c1941e8-764c-43fc-8711-ed22bf148266 2570878 0 2020-05-08 11:24:08 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 92d160d3-5997-425d-ab25-1241e5d4c434 0xc0046d1d07 0xc0046d1d08}] []  [{kube-controller-manager Update v1 2020-05-08 11:24:08 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 57 50 100 49 54 48 100 51 45 53 57 57 55 45 52 50 53 100 45 97 98 50 53 45 49 50 52 49 101 53 100 52 99 52 51 52 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 
111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-l8sgt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-l8sgt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-l8sgt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOpt
ions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 11:24:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  8 11:24:09.221: INFO: Pod "webserver-deployment-6676bcd6d4-r2qbn" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-r2qbn webserver-deployment-6676bcd6d4- deployment-1207 /api/v1/namespaces/deployment-1207/pods/webserver-deployment-6676bcd6d4-r2qbn 0d7805c0-b2cb-41d4-a745-0575f324f12d 2570842 0 2020-05-08 11:24:08 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 92d160d3-5997-425d-ab25-1241e5d4c434 0xc0046d1e47 0xc0046d1e48}] []  [{kube-controller-manager Update v1 2020-05-08 11:24:08 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 57 50 100 49 54 48 100 51 45 53 57 57 55 45 52 50 53 100 45 97 98 50 53 45 49 50 52 49 101 53 100 52 99 52 51 52 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 
111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-l8sgt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-l8sgt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-l8sgt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOpt
ions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 11:24:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  8 11:24:09.221: INFO: Pod "webserver-deployment-6676bcd6d4-zg2vg" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-zg2vg webserver-deployment-6676bcd6d4- deployment-1207 /api/v1/namespaces/deployment-1207/pods/webserver-deployment-6676bcd6d4-zg2vg 27f225e2-a443-431e-a90b-e1bc25adba4a 2570873 0 2020-05-08 11:24:08 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 92d160d3-5997-425d-ab25-1241e5d4c434 0xc0046d1f87 0xc0046d1f88}] []  [{kube-controller-manager Update v1 2020-05-08 11:24:08 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 57 50 100 49 54 48 100 51 45 53 57 57 55 45 52 50 53 100 45 97 98 50 53 45 49 50 52 49 101 53 100 52 99 52 51 52 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 
111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-l8sgt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-l8sgt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-l8sgt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOpt
ions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 11:24:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  8 11:24:09.221: INFO: Pod "webserver-deployment-84855cf797-289qr" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-289qr webserver-deployment-84855cf797- deployment-1207 /api/v1/namespaces/deployment-1207/pods/webserver-deployment-84855cf797-289qr 6486d213-cb5f-4214-87ab-27e08c3ba151 2570748 0 2020-05-08 11:23:51 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 74d48b07-a3b3-487e-9112-3b80a86c229b 0xc0047580c7 0xc0047580c8}] []  [{kube-controller-manager Update v1 2020-05-08 11:23:51 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 55 52 100 52 56 98 48 55 45 97 51 98 51 45 52 56 55 101 45 57 49 49 50 45 51 98 56 48 97 56 54 99 50 50 57 98 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 
111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-08 11:24:03 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 
44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 49 46 50 52 52 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-l8sgt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-l8sgt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-l8sgt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesyst
em:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 11:23:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 11:24:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 11:24:03 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 11:23:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:10.244.1.244,StartTime:2020-05-08 11:23:51 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-08 11:24:03 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://6c8f70b0cc789e258657b90e3b0866ab1d90c85a4c05903eec7f141fe2209483,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.244,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
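The `FieldsV1{Raw:*[...]}` values in the Pod dumps above are managedFields JSON documents printed as decimal byte slices. A minimal illustrative decoder (plain Python, not part of the e2e suite) that turns such a slice back into a readable JSON object:

```python
# The managedFields "FieldsV1{Raw:*[...]}" values in the dumps above are JSON
# documents rendered as decimal byte slices. This helper (illustrative only,
# not part of the test framework) decodes one back into a Python dict.
import json

def decode_fieldsv1(raw_bytes):
    """Decode a list of decimal byte values into the JSON object it encodes."""
    text = bytes(raw_bytes).decode("utf-8")
    return json.loads(text)

# The first bytes of the dumps above spell '{"f:metadata":{...'; this short
# sample is a self-contained valid fragment in the same encoding.
sample = [123, 34, 102, 58, 109, 101, 116, 97, 100, 97, 116, 97, 34, 58,
          123, 34, 102, 58, 103, 101, 110, 101, 114, 97, 116, 101, 78,
          97, 109, 101, 34, 58, 123, 125, 125, 125]
print(decode_fieldsv1(sample))
```

Applied to the full slices, this recovers the per-manager field ownership maps (`f:metadata`, `f:spec`, `f:status`, ...) that kube-controller-manager and the kubelet set on each Pod.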
May  8 11:24:09.222: INFO: Pod "webserver-deployment-84855cf797-2xlxf" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-2xlxf webserver-deployment-84855cf797- deployment-1207 /api/v1/namespaces/deployment-1207/pods/webserver-deployment-84855cf797-2xlxf 71583cbf-d54e-41ee-aa33-17da5ecfb58e 2570870 0 2020-05-08 11:24:08 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 74d48b07-a3b3-487e-9112-3b80a86c229b 0xc004758277 0xc004758278}] []  [{kube-controller-manager Update v1 2020-05-08 11:24:08 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 55 52 100 52 56 98 48 55 45 97 51 98 51 45 52 56 55 101 45 57 49 49 50 45 51 98 56 48 97 56 54 99 50 50 57 98 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 
111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-l8sgt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-l8sgt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-l8sgt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,P
rivileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 11:24:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  8 11:24:09.222: INFO: Pod "webserver-deployment-84855cf797-6jdfr" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-6jdfr webserver-deployment-84855cf797- deployment-1207 /api/v1/namespaces/deployment-1207/pods/webserver-deployment-84855cf797-6jdfr be8c4435-968c-486d-829b-c2d9aee84fd8 2570696 0 2020-05-08 11:23:51 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 74d48b07-a3b3-487e-9112-3b80a86c229b 0xc0047583b7 0xc0047583b8}] []  [{kube-controller-manager Update v1 2020-05-08 11:23:51 +0000 UTC FieldsV1 FieldsV1{Raw:*{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"74d48b07-a3b3-487e-9112-3b80a86c229b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}} {kubelet Update v1 2020-05-08 11:23:59 +0000 UTC FieldsV1 &FieldsV1{Raw:*{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.193\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-l8sgt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-l8sgt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-l8sgt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesyst
em:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 11:23:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 11:23:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 11:23:59 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 11:23:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:10.244.2.193,StartTime:2020-05-08 11:23:51 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-08 11:23:57 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://eb238300d5c632ffbaa1345fa3f9ab208df628eb33cbdcebe6ce8dcd6957a403,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.193,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
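The `managedFields` entries in pod dumps like the one above carry a `FieldsV1{Raw:*...}` payload that some Kubernetes log paths print as a list of decimal byte values rather than as text; the payload is plain UTF-8 JSON. A minimal sketch of the decoding (the short byte fragment here is a hypothetical example, not taken from the dump above):

```python
def decode_fields_v1(byte_list):
    """Turn a FieldsV1 Raw payload, printed as decimal byte values,
    back into its UTF-8 JSON text."""
    return bytes(byte_list).decode("utf-8")

# Hypothetical fragment: the bytes for the JSON text {"f:status":{}}
fragment = [123, 34, 102, 58, 115, 116, 97, 116, 117, 115, 34, 58, 123, 125, 125]
print(decode_fields_v1(fragment))  # -> {"f:status":{}}
```

The decoded objects use the server-side-apply field-path notation (`f:` for fields, `k:` for list keys), which is why keys like `f:metadata` and `k:{"name":"httpd"}` appear throughout.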
May  8 11:24:09.222: INFO: Pod "webserver-deployment-84855cf797-c5psx" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-c5psx webserver-deployment-84855cf797- deployment-1207 /api/v1/namespaces/deployment-1207/pods/webserver-deployment-84855cf797-c5psx d0525bd2-c5fa-4001-ba2d-b409c81354a0 2570862 0 2020-05-08 11:24:08 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 74d48b07-a3b3-487e-9112-3b80a86c229b 0xc004758577 0xc004758578}] []  [{kube-controller-manager Update v1 2020-05-08 11:24:08 +0000 UTC FieldsV1 FieldsV1{Raw:*{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"74d48b07-a3b3-487e-9112-3b80a86c229b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-l8sgt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-l8sgt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-l8sgt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,P
rivileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 11:24:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  8 11:24:09.222: INFO: Pod "webserver-deployment-84855cf797-c9c2h" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-c9c2h webserver-deployment-84855cf797- deployment-1207 /api/v1/namespaces/deployment-1207/pods/webserver-deployment-84855cf797-c9c2h abe33209-2021-48e2-9371-b13a91017a12 2570745 0 2020-05-08 11:23:51 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 74d48b07-a3b3-487e-9112-3b80a86c229b 0xc0047586a7 0xc0047586a8}] []  [{kube-controller-manager Update v1 2020-05-08 11:23:51 +0000 UTC FieldsV1 FieldsV1{Raw:*{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"74d48b07-a3b3-487e-9112-3b80a86c229b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}} {kubelet Update v1 2020-05-08 11:24:03 +0000 UTC FieldsV1 &FieldsV1{Raw:*{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.246\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-l8sgt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-l8sgt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-l8sgt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesyst
em:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 11:23:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 11:24:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 11:24:03 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 11:23:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:10.244.1.246,StartTime:2020-05-08 11:23:51 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-08 11:24:03 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://0b8e120b20c8e935e730e3f85f5081c13d010ed7d72a6403f1166ac50b8e6c3c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.246,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
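The log's "is available" / "is not available" verdicts track the pods' condition lists: the Running pods above carry `Ready=True`, while the Pending ones have only `PodScheduled=True`. A minimal sketch of that check, assuming the Deployment's `minReadySeconds` is 0 (its default) so readiness alone decides availability:

```python
def is_pod_available(conditions):
    """Sketch of the availability check, assuming minReadySeconds == 0:
    a pod counts as available once its Ready condition is True."""
    return any(c.get("type") == "Ready" and c.get("status") == "True"
               for c in conditions)

# Conditions as they appear in the dumps above (trimmed to type/status).
running = [{"type": "Initialized", "status": "True"},
           {"type": "Ready", "status": "True"},
           {"type": "ContainersReady", "status": "True"},
           {"type": "PodScheduled", "status": "True"}]
pending = [{"type": "PodScheduled", "status": "True"}]
print(is_pod_available(running), is_pod_available(pending))  # True False
```

With a nonzero `minReadySeconds`, the controller would additionally require `Ready=True` to have held for that long before counting the replica as available.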
May  8 11:24:09.223: INFO: Pod "webserver-deployment-84855cf797-d7qgw" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-d7qgw webserver-deployment-84855cf797- deployment-1207 /api/v1/namespaces/deployment-1207/pods/webserver-deployment-84855cf797-d7qgw 010be05c-f19b-42ab-b9ef-a687e7456ecd 2570857 0 2020-05-08 11:24:08 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 74d48b07-a3b3-487e-9112-3b80a86c229b 0xc004758867 0xc004758868}] []  [{kube-controller-manager Update v1 2020-05-08 11:24:08 +0000 UTC FieldsV1 FieldsV1{Raw:*{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"74d48b07-a3b3-487e-9112-3b80a86c229b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-l8sgt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-l8sgt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-l8sgt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,P
rivileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 11:24:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  8 11:24:09.223: INFO: Pod "webserver-deployment-84855cf797-km24l" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-km24l webserver-deployment-84855cf797- deployment-1207 /api/v1/namespaces/deployment-1207/pods/webserver-deployment-84855cf797-km24l 4157911d-b103-47e6-8d86-d708fbc636af 2570854 0 2020-05-08 11:24:08 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 74d48b07-a3b3-487e-9112-3b80a86c229b 0xc004758997 0xc004758998}] []  [{kube-controller-manager Update v1 2020-05-08 11:24:08 +0000 UTC FieldsV1 FieldsV1{Raw:*{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"74d48b07-a3b3-487e-9112-3b80a86c229b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-l8sgt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-l8sgt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-l8sgt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,P
rivileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 11:24:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  8 11:24:09.223: INFO: Pod "webserver-deployment-84855cf797-km9h4" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-km9h4 webserver-deployment-84855cf797- deployment-1207 /api/v1/namespaces/deployment-1207/pods/webserver-deployment-84855cf797-km9h4 3657e54f-3dec-4543-9093-a337fabc3121 2570883 0 2020-05-08 11:24:08 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 74d48b07-a3b3-487e-9112-3b80a86c229b 0xc004758ac7 0xc004758ac8}] []  [{kube-controller-manager Update v1 2020-05-08 11:24:08 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"74d48b07-a3b3-487e-9112-3b80a86c229b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}} {kubelet Update v1 2020-05-08 11:24:08 +0000 UTC FieldsV1 &FieldsV1{Raw:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-l8sgt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-l8sgt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-l8sgt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,T
TY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 11:24:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 11:24:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 11:24:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 11:24:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:,StartTime:2020-05-08 11:24:08 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
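The `FieldsV1{Raw:...}` payloads in the dumps above are `[]byte` values that Go's `%v` verb prints as decimal byte lists; each one is just UTF-8 encoded JSON describing managed-field ownership. A minimal sketch of recovering the JSON (the byte list below is a short illustrative excerpt, not a full payload from this log):

```python
# Each FieldsV1 Raw payload is a []byte holding UTF-8 JSON; Go's %v prints it
# as decimal byte values. Joining the values back into bytes and decoding
# recovers the JSON. This excerpt is illustrative, not a full log payload.
raw = [123, 34, 102, 58, 115, 116, 97, 116, 117, 115, 34, 58, 123,
       34, 102, 58, 104, 111, 115, 116, 73, 80, 34, 58, 123, 125, 125, 125]
decoded = bytes(raw).decode("utf-8")
print(decoded)  # {"f:status":{"f:hostIP":{}}}
```

The same one-liner (`bytes(values).decode("utf-8")`) works on any of the full payloads printed in this log.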
May  8 11:24:09.224: INFO: Pod "webserver-deployment-84855cf797-kspzw" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-kspzw webserver-deployment-84855cf797- deployment-1207 /api/v1/namespaces/deployment-1207/pods/webserver-deployment-84855cf797-kspzw 7898f2b5-aea0-4858-ac3b-de8da606c491 2570688 0 2020-05-08 11:23:51 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 74d48b07-a3b3-487e-9112-3b80a86c229b 0xc004758c57 0xc004758c58}] []  [{kube-controller-manager Update v1 2020-05-08 11:23:51 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"74d48b07-a3b3-487e-9112-3b80a86c229b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}} {kubelet Update v1 2020-05-08 11:23:58 +0000 UTC FieldsV1 &FieldsV1{Raw:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.240\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-l8sgt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-l8sgt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-l8sgt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesyst
em:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 11:23:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 11:23:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 11:23:58 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 11:23:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:10.244.1.240,StartTime:2020-05-08 11:23:51 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-08 11:23:57 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://07ef7638f49dce694060dd1aed50d0e53002c62e91963c9dfc1d23b040cfb2e3,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.240,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
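The "is available" / "is not available" verdicts in these lines track the pod's `Ready` condition: kspzw above has `Type:Ready,Status:True`, while the Pending pods have only `PodScheduled` or `Ready,Status:False` with `ContainersNotReady`. A minimal sketch of that check (a hypothetical helper, not the e2e framework's own code):

```python
# Hedged sketch: a pod counts as ready/available when its conditions include
# Type "Ready" with Status "True", as in the kspzw dump; the Pending pods in
# this log lack that condition or have it False. Hypothetical helper name.
def is_pod_ready(conditions):
    return any(c["type"] == "Ready" and c["status"] == "True" for c in conditions)

available = [{"type": "Initialized", "status": "True"},
             {"type": "Ready", "status": "True"}]
pending = [{"type": "PodScheduled", "status": "True"}]
print(is_pod_ready(available), is_pod_ready(pending))  # True False
```

Deployment availability additionally waits out `minReadySeconds`, but with the default of 0 the `Ready` condition alone decides it.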
May  8 11:24:09.224: INFO: Pod "webserver-deployment-84855cf797-m822f" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-m822f webserver-deployment-84855cf797- deployment-1207 /api/v1/namespaces/deployment-1207/pods/webserver-deployment-84855cf797-m822f 54fb9b82-6481-4db5-86cb-7871c09b8ab6 2570864 0 2020-05-08 11:24:08 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 74d48b07-a3b3-487e-9112-3b80a86c229b 0xc004758e07 0xc004758e08}] []  [{kube-controller-manager Update v1 2020-05-08 11:24:08 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"74d48b07-a3b3-487e-9112-3b80a86c229b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-l8sgt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-l8sgt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-l8sgt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,P
rivileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 11:24:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  8 11:24:09.224: INFO: Pod "webserver-deployment-84855cf797-n5z4c" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-n5z4c webserver-deployment-84855cf797- deployment-1207 /api/v1/namespaces/deployment-1207/pods/webserver-deployment-84855cf797-n5z4c 131be8db-3dfa-4691-9261-30ae2e8e5514 2570855 0 2020-05-08 11:24:08 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 74d48b07-a3b3-487e-9112-3b80a86c229b 0xc004758f37 0xc004758f38}] []  [{kube-controller-manager Update v1 2020-05-08 11:24:08 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"74d48b07-a3b3-487e-9112-3b80a86c229b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-l8sgt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-l8sgt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-l8sgt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,P
rivileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 11:24:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
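Every pod in these dumps has empty `Limits` and `Requests` in its `ResourceRequirements`, which is why the status reports `QOSClass:BestEffort`. A simplified sketch of that classification rule (an illustration of the documented QoS tiers, not kubelet source):

```python
# Simplified QoS classification as seen in the dumps: no requests/limits on
# any container -> BestEffort; requests == limits for every container ->
# Guaranteed; anything else -> Burstable. Illustration only, not kubelet code.
def qos_class(containers):
    if all(not c.get("requests") and not c.get("limits") for c in containers):
        return "BestEffort"
    if all(c.get("requests") and c.get("requests") == c.get("limits")
           for c in containers):
        return "Guaranteed"
    return "Burstable"

# The httpd containers in this log carry empty resource maps:
print(qos_class([{"requests": {}, "limits": {}}]))  # BestEffort
```

The real kubelet evaluates each resource type (cpu, memory) separately, but the empty-resources case shown in this log always lands in BestEffort.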
May  8 11:24:09.224: INFO: Pod "webserver-deployment-84855cf797-nbqwq" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-nbqwq webserver-deployment-84855cf797- deployment-1207 /api/v1/namespaces/deployment-1207/pods/webserver-deployment-84855cf797-nbqwq 4627b36d-ec1c-4312-898d-8d1df94955bd 2570863 0 2020-05-08 11:24:07 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 74d48b07-a3b3-487e-9112-3b80a86c229b 0xc004759067 0xc004759068}] []  [{kube-controller-manager Update v1 2020-05-08 11:24:07 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"74d48b07-a3b3-487e-9112-3b80a86c229b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}} {kubelet Update v1 2020-05-08 11:24:08 +0000 UTC FieldsV1 &FieldsV1{Raw:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-l8sgt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-l8sgt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-l8sgt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,T
TY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 11:24:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 11:24:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 11:24:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 11:24:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-05-08 11:24:08 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  8 11:24:09.225: INFO: Pod "webserver-deployment-84855cf797-qcg25" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-qcg25 webserver-deployment-84855cf797- deployment-1207 /api/v1/namespaces/deployment-1207/pods/webserver-deployment-84855cf797-qcg25 3bb2667a-d9d8-4e17-a6c9-5e020865a05b 2570722 0 2020-05-08 11:23:51 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 74d48b07-a3b3-487e-9112-3b80a86c229b 0xc0047591f7 0xc0047591f8}] []  [{kube-controller-manager Update v1 2020-05-08 11:23:51 +0000 UTC FieldsV1 FieldsV1{Raw:*[{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"74d48b07-a3b3-487e-9112-3b80a86c229b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}],}} {kubelet Update v1 2020-05-08 11:24:02 +0000 UTC FieldsV1 &FieldsV1{Raw:*[{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.195\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-l8sgt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-l8sgt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-l8sgt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesyst
em:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 11:23:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 11:24:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 11:24:01 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 11:23:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:10.244.2.195,StartTime:2020-05-08 11:23:51 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-08 11:24:00 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://ef87e78befa6adadb49c055d33b5f3792dc5b8b6ffe9fa58586a5a50a2d2430b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.195,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  8 11:24:09.225: INFO: Pod "webserver-deployment-84855cf797-rgg2n" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-rgg2n webserver-deployment-84855cf797- deployment-1207 /api/v1/namespaces/deployment-1207/pods/webserver-deployment-84855cf797-rgg2n 61a2a1ea-2422-4d7d-8464-4f28d6a219c8 2570751 0 2020-05-08 11:23:51 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 74d48b07-a3b3-487e-9112-3b80a86c229b 0xc0047593c7 0xc0047593c8}] []  [{kube-controller-manager Update v1 2020-05-08 11:23:51 +0000 UTC FieldsV1 FieldsV1{Raw:*[{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"74d48b07-a3b3-487e-9112-3b80a86c229b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}],}} {kubelet Update v1 2020-05-08 11:24:03 +0000 UTC FieldsV1 &FieldsV1{Raw:*[{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.243\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-l8sgt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-l8sgt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-l8sgt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesyst
em:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 11:23:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 11:24:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 11:24:03 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 11:23:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:10.244.1.243,StartTime:2020-05-08 11:23:51 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-08 11:24:03 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://6873402f91be1b98fa4f458e64ab014b0d5d7b39bdf2c4ffd93f8566d90f9d89,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.243,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  8 11:24:09.225: INFO: Pod "webserver-deployment-84855cf797-rp7h2" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-rp7h2 webserver-deployment-84855cf797- deployment-1207 /api/v1/namespaces/deployment-1207/pods/webserver-deployment-84855cf797-rp7h2 a4ba7ae0-4aff-4bdf-bb38-32bd764719b8 2570874 0 2020-05-08 11:24:08 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 74d48b07-a3b3-487e-9112-3b80a86c229b 0xc004759577 0xc004759578}] []  [{kube-controller-manager Update v1 2020-05-08 11:24:08 +0000 UTC FieldsV1 FieldsV1{Raw:*[{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"74d48b07-a3b3-487e-9112-3b80a86c229b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-l8sgt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-l8sgt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-l8sgt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,P
rivileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 11:24:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
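The Raw:*[...] managed-fields payloads in the pod dumps above are FieldsV1 JSON that this log renders as space-separated decimal byte values. A minimal sketch of decoding such a dump back to readable JSON (decode_fieldsv1 is a hypothetical helper written for illustration, not part of the e2e framework):

```python
import json

def decode_fieldsv1(raw: str) -> dict:
    # Each token in the dump is the decimal value of one ASCII byte;
    # joining them yields the UTF-8 JSON document kubelet/controller wrote.
    data = bytes(int(b) for b in raw.split())
    return json.loads(data.decode("utf-8"))

# Short sample in the same format as the log lines above.
sample = "123 34 102 58 115 116 97 116 117 115 34 58 123 125 125"
print(decode_fieldsv1(sample))  # {'f:status': {}}
```

Running this over a full byte run from the log yields the managed-fields map, e.g. which fields kube-controller-manager versus kubelet last set on the Pod.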
May  8 11:24:09.226: INFO: Pod "webserver-deployment-84855cf797-vktl9" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-vktl9 webserver-deployment-84855cf797- deployment-1207 /api/v1/namespaces/deployment-1207/pods/webserver-deployment-84855cf797-vktl9 079a6828-ddae-40e2-9a18-da6b7bdb2c4d 2570710 0 2020-05-08 11:23:51 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 74d48b07-a3b3-487e-9112-3b80a86c229b 0xc0047596b7 0xc0047596b8}] []  [{kube-controller-manager Update v1 2020-05-08 11:23:51 +0000 UTC FieldsV1 FieldsV1{Raw:*[{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"74d48b07-a3b3-487e-9112-3b80a86c229b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}],}} {kubelet Update v1 2020-05-08 11:24:00 +0000 UTC FieldsV1 &FieldsV1{Raw:*[{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.194\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-l8sgt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-l8sgt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-l8sgt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesyst
em:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 11:23:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 11:24:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 11:24:00 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 11:23:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:10.244.2.194,StartTime:2020-05-08 11:23:51 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-08 11:23:59 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://711e22f2ba0c47b2077aa3ee8464e8b33e8a72f77e56f24782d0d642736f916d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.194,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
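Editor's note: the long `FieldsV1{Raw:*[123 34 102 …]}` runs in these Pod dumps are not corruption — Go's `%v` verb prints the `Raw` byte slice of each `managedFields` entry as decimal byte values, and each array is simply the UTF-8 encoding of a small JSON document describing which fields that manager owns. A minimal sketch of decoding one (the helper name is illustrative, not part of any Kubernetes library):

```python
# Decode a FieldsV1 Raw dump (decimal byte values, as printed by Go's %v)
# back into the UTF-8 JSON string it encodes.
def decode_fieldsv1(byte_values):
    """Turn a list of decimal byte values into the JSON text they encode."""
    return bytes(byte_values).decode("utf-8")

# First 15 bytes of the kube-controller-manager entry above.
prefix = [123, 34, 102, 58, 109, 101, 116, 97, 100, 97, 116, 97, 34, 58, 123]
print(decode_fieldsv1(prefix))  # → {"f:metadata":{
```

Decoded in full, the controller-manager entry describes ownership of the pod's labels, owner references, and container spec fields, while the kubelet entry covers `status` (conditions, pod IPs, and so on).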
May  8 11:24:09.226: INFO: Pod "webserver-deployment-84855cf797-w8g4g" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-w8g4g webserver-deployment-84855cf797- deployment-1207 /api/v1/namespaces/deployment-1207/pods/webserver-deployment-84855cf797-w8g4g e3d508ae-37f8-4826-a28c-6f3a3661f403 2570865 0 2020-05-08 11:24:08 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 74d48b07-a3b3-487e-9112-3b80a86c229b 0xc004759877 0xc004759878}] []  [{kube-controller-manager Update v1 2020-05-08 11:24:08 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 55 52 100 52 56 98 48 55 45 97 51 98 51 45 52 56 55 101 45 57 49 49 50 45 51 98 56 48 97 56 54 99 50 50 57 98 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 
111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-l8sgt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-l8sgt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-l8sgt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,P
rivileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 11:24:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  8 11:24:09.226: INFO: Pod "webserver-deployment-84855cf797-xsprb" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-xsprb webserver-deployment-84855cf797- deployment-1207 /api/v1/namespaces/deployment-1207/pods/webserver-deployment-84855cf797-xsprb 2ddeec4e-abca-417c-b9d0-62fbd02e4e22 2570838 0 2020-05-08 11:24:08 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 74d48b07-a3b3-487e-9112-3b80a86c229b 0xc0047599a7 0xc0047599a8}] []  [{kube-controller-manager Update v1 2020-05-08 11:24:08 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 55 52 100 52 56 98 48 55 45 97 51 98 51 45 52 56 55 101 45 57 49 49 50 45 51 98 56 48 97 56 54 99 50 50 57 98 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 
111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-l8sgt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-l8sgt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-l8sgt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,P
rivileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 11:24:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  8 11:24:09.226: INFO: Pod "webserver-deployment-84855cf797-zkmtp" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-zkmtp webserver-deployment-84855cf797- deployment-1207 /api/v1/namespaces/deployment-1207/pods/webserver-deployment-84855cf797-zkmtp 15c90ad0-5434-4829-bf35-ef8dd8df0617 2570856 0 2020-05-08 11:24:08 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 74d48b07-a3b3-487e-9112-3b80a86c229b 0xc004759ad7 0xc004759ad8}] []  [{kube-controller-manager Update v1 2020-05-08 11:24:08 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 55 52 100 52 56 98 48 55 45 97 51 98 51 45 52 56 55 101 45 57 49 49 50 45 51 98 56 48 97 56 54 99 50 50 57 98 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 
111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-l8sgt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-l8sgt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-l8sgt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,P
rivileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 11:24:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  8 11:24:09.226: INFO: Pod "webserver-deployment-84855cf797-ztfjk" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-ztfjk webserver-deployment-84855cf797- deployment-1207 /api/v1/namespaces/deployment-1207/pods/webserver-deployment-84855cf797-ztfjk 08a4b3c0-824a-4149-abc4-59b1d426b0c1 2570704 0 2020-05-08 11:23:51 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 74d48b07-a3b3-487e-9112-3b80a86c229b 0xc004759c07 0xc004759c08}] []  [{kube-controller-manager Update v1 2020-05-08 11:23:51 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 55 52 100 52 56 98 48 55 45 97 51 98 51 45 52 56 55 101 45 57 49 49 50 45 51 98 56 48 97 56 54 99 50 50 57 98 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 
111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-08 11:24:00 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 
44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 49 46 50 52 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-l8sgt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-l8sgt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-l8sgt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesyst
em:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 11:23:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 11:23:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 11:23:59 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 11:23:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:10.244.1.241,StartTime:2020-05-08 11:23:51 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-08 11:23:58 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://5581c243c4af8e9f1ebec4dee1bbbb4fe5c8ad06729b2b8a4df53c05481e2a99,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.241,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:24:09.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-1207" for this suite.

• [SLOW TEST:19.095 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":275,"completed":128,"skipped":2128,"failed":0}
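The proportional-scaling behaviour this test passed can be sketched outside the cluster: when a Deployment is resized mid-rollout, the controller distributes the replica delta across the active ReplicaSets in proportion to their current sizes. The helper below is a simplified, hypothetical re-implementation of that arithmetic only; the real kube-controller-manager logic additionally honors maxSurge/maxUnavailable bounds.

```python
# Simplified sketch of proportional scaling across ReplicaSets.
# NOT the exact controller algorithm (which also enforces
# maxSurge/maxUnavailable); this shows only the proportional idea.

def scale_proportionally(rs_sizes, new_total):
    """Distribute new_total replicas across ReplicaSets in proportion
    to their current sizes, handing leftovers to the largest sets."""
    current_total = sum(rs_sizes)
    if current_total == 0:
        return rs_sizes[:]  # nothing to scale proportionally
    scaled = [size * new_total // current_total for size in rs_sizes]
    leftover = new_total - sum(scaled)
    # Hand any remaining replicas to the largest ReplicaSets first.
    for i in sorted(range(len(rs_sizes)), key=lambda i: -rs_sizes[i]):
        if leftover == 0:
            break
        scaled[i] += 1
        leftover -= 1
    return scaled

# e.g. a rollout with ReplicaSets at 8 and 2 replicas, scaled 10 -> 30
print(scale_proportionally([8, 2], 30))  # [24, 6]
```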
[k8s.io] Security Context When creating a pod with privileged 
  should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:24:10.453: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May  8 11:24:12.732: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-288d8743-ff99-491d-a605-b159dd191c82" in namespace "security-context-test-2457" to be "Succeeded or Failed"
May  8 11:24:13.079: INFO: Pod "busybox-privileged-false-288d8743-ff99-491d-a605-b159dd191c82": Phase="Pending", Reason="", readiness=false. Elapsed: 346.812683ms
May  8 11:24:15.540: INFO: Pod "busybox-privileged-false-288d8743-ff99-491d-a605-b159dd191c82": Phase="Pending", Reason="", readiness=false. Elapsed: 2.807737337s
May  8 11:24:17.680: INFO: Pod "busybox-privileged-false-288d8743-ff99-491d-a605-b159dd191c82": Phase="Pending", Reason="", readiness=false. Elapsed: 4.948071187s
May  8 11:24:19.940: INFO: Pod "busybox-privileged-false-288d8743-ff99-491d-a605-b159dd191c82": Phase="Pending", Reason="", readiness=false. Elapsed: 7.208468232s
May  8 11:24:22.246: INFO: Pod "busybox-privileged-false-288d8743-ff99-491d-a605-b159dd191c82": Phase="Pending", Reason="", readiness=false. Elapsed: 9.514107601s
May  8 11:24:24.280: INFO: Pod "busybox-privileged-false-288d8743-ff99-491d-a605-b159dd191c82": Phase="Pending", Reason="", readiness=false. Elapsed: 11.54806082s
May  8 11:24:26.863: INFO: Pod "busybox-privileged-false-288d8743-ff99-491d-a605-b159dd191c82": Phase="Pending", Reason="", readiness=false. Elapsed: 14.131132582s
May  8 11:24:29.443: INFO: Pod "busybox-privileged-false-288d8743-ff99-491d-a605-b159dd191c82": Phase="Pending", Reason="", readiness=false. Elapsed: 16.711451327s
May  8 11:24:31.856: INFO: Pod "busybox-privileged-false-288d8743-ff99-491d-a605-b159dd191c82": Phase="Pending", Reason="", readiness=false. Elapsed: 19.124641971s
May  8 11:24:34.158: INFO: Pod "busybox-privileged-false-288d8743-ff99-491d-a605-b159dd191c82": Phase="Succeeded", Reason="", readiness=false. Elapsed: 21.42650734s
May  8 11:24:34.158: INFO: Pod "busybox-privileged-false-288d8743-ff99-491d-a605-b159dd191c82" satisfied condition "Succeeded or Failed"
May  8 11:24:34.611: INFO: Got logs for pod "busybox-privileged-false-288d8743-ff99-491d-a605-b159dd191c82": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:24:34.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-2457" for this suite.

• [SLOW TEST:24.361 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  When creating a pod with privileged
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:227
    should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":129,"skipped":2128,"failed":0}
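The "ip: RTNETLINK answers: Operation not permitted" log captured above is the expected outcome: with privileged set to false the busybox container lacks CAP_NET_ADMIN, so netlink operations such as `ip link` are denied. A minimal sketch of the kind of manifest this case submits follows; the name, image tag, and command are illustrative, not the test's exact generated spec.

```python
# Sketch of a pod manifest like the one this e2e case creates: an
# unprivileged busybox that attempts a privileged network operation.
# Names and the command are illustrative; the real test generates its own.

pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "busybox-privileged-false-demo"},
    "spec": {
        "restartPolicy": "Never",
        "containers": [{
            "name": "busybox-privileged-false-demo",
            "image": "busybox",
            # Without CAP_NET_ADMIN this prints
            # "ip: RTNETLINK answers: Operation not permitted"
            "command": ["sh", "-c", "ip link add dummy0 type dummy || true"],
            "securityContext": {"privileged": False},
        }],
    },
}

print(pod["spec"]["containers"][0]["securityContext"])
```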
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch 
  watch on custom resource definition objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:24:34.815: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] watch on custom resource definition objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May  8 11:24:35.369: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating first CR 
May  8 11:24:36.352: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-08T11:24:36Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-08T11:24:36Z]] name:name1 resourceVersion:2571219 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:000cb15c-8fe8-4517-b080-e4e356e9af70] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Creating second CR
May  8 11:24:46.358: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-08T11:24:46Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-08T11:24:46Z]] name:name2 resourceVersion:2571265 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:1e961b62-0472-464d-9288-b5ba0f89178a] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying first CR
May  8 11:24:56.364: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-08T11:24:36Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-08T11:24:56Z]] name:name1 resourceVersion:2571296 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:000cb15c-8fe8-4517-b080-e4e356e9af70] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying second CR
May  8 11:25:06.371: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-08T11:24:46Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-08T11:25:06Z]] name:name2 resourceVersion:2571326 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:1e961b62-0472-464d-9288-b5ba0f89178a] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting first CR
May  8 11:25:16.381: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-08T11:24:36Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-08T11:24:56Z]] name:name1 resourceVersion:2571356 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:000cb15c-8fe8-4517-b080-e4e356e9af70] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting second CR
May  8 11:25:26.447: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-08T11:24:46Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-08T11:25:06Z]] name:name2 resourceVersion:2571386 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:1e961b62-0472-464d-9288-b5ba0f89178a] num:map[num1:9223372036854775807 num2:1000000]]}
[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:25:36.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-watch-2816" for this suite.

• [SLOW TEST:62.152 seconds]
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  CustomResourceDefinition Watch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42
    watch on custom resource definition objects [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":275,"completed":130,"skipped":2168,"failed":0}
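The ADDED/MODIFIED/DELETED events logged for name1 and name2 follow the standard Kubernetes watch protocol: each event pairs a type with the full object, and resourceVersion increases monotonically so a client can resume a watch. A minimal, hypothetical check in the spirit of what this test asserts (the event data below reuses the resourceVersions from the log):

```python
# Sketch of validating a custom-resource watch stream like the one
# logged above: per object, the test expects ADDED, MODIFIED, DELETED
# with monotonically increasing resourceVersions.

def check_watch_sequence(events, name):
    """Return the ordered event types seen for one custom resource,
    asserting that resourceVersion strictly increases."""
    seen, last_rv = [], -1
    for ev in events:
        meta = ev["object"]["metadata"]
        if meta["name"] != name:
            continue
        rv = int(meta["resourceVersion"])
        assert rv > last_rv, "resourceVersion must increase"
        last_rv = rv
        seen.append(ev["type"])
    return seen

events = [
    {"type": "ADDED",    "object": {"metadata": {"name": "name1", "resourceVersion": "2571219"}}},
    {"type": "MODIFIED", "object": {"metadata": {"name": "name1", "resourceVersion": "2571296"}}},
    {"type": "DELETED",  "object": {"metadata": {"name": "name1", "resourceVersion": "2571356"}}},
]
print(check_watch_sequence(events, "name1"))  # ['ADDED', 'MODIFIED', 'DELETED']
```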
S
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:25:36.967: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May  8 11:25:37.065: INFO: Pod name cleanup-pod: Found 0 pods out of 1
May  8 11:25:42.078: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
May  8 11:25:42.078: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
May  8 11:25:42.119: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:{test-cleanup-deployment  deployment-3685 /apis/apps/v1/namespaces/deployment-3685/deployments/test-cleanup-deployment 56095f64-61bd-4660-bd62-e049a0b2fb72 2571454 1 2020-05-08 11:25:42 +0000 UTC   map[name:cleanup-pod] map[] [] []  [{e2e.test Update apps/v1 2020-05-08 11:25:42 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 112 114 111 103 114 101 115 115 68 101 97 100 108 105 110 101 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 118 105 115 105 111 110 72 105 115 116 111 114 121 76 105 109 105 116 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 116 114 97 116 101 103 121 34 58 123 34 102 58 114 111 108 108 105 110 103 85 112 100 97 116 101 34 58 123 34 46 34 58 123 125 44 34 102 58 109 97 120 83 117 114 103 101 34 58 123 125 44 34 102 58 109 97 120 85 110 97 118 97 105 108 97 98 108 101 34 58 123 125 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 
115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00542bda8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] 
[]}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},}

May  8 11:25:42.145: INFO: New ReplicaSet "test-cleanup-deployment-b4867b47f" of Deployment "test-cleanup-deployment":
&ReplicaSet{ObjectMeta:{test-cleanup-deployment-b4867b47f  deployment-3685 /apis/apps/v1/namespaces/deployment-3685/replicasets/test-cleanup-deployment-b4867b47f 45d7bc80-d6db-410f-8116-dee123d05464 2571456 1 2020-05-08 11:25:42 +0000 UTC   map[name:cleanup-pod pod-template-hash:b4867b47f] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 56095f64-61bd-4660-bd62-e049a0b2fb72 0xc0030db540 0xc0030db541}] []  [{kube-controller-manager Update apps/v1 2020-05-08 11:25:42 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 53 54 48 57 53 102 54 52 45 54 49 98 100 45 52 54 54 48 45 98 100 54 50 45 101 48 52 57 97 48 98 50 102 98 55 50 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 
125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 
34 58 123 125 125 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: b4867b47f,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod pod-template-hash:b4867b47f] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0030db5c8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
May  8 11:25:42.145: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment":
May  8 11:25:42.145: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller  deployment-3685 /apis/apps/v1/namespaces/deployment-3685/replicasets/test-cleanup-controller c97ebbd7-c9f1-4762-9aa5-0a5c0adbcbff 2571455 1 2020-05-08 11:25:37 +0000 UTC   map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 56095f64-61bd-4660-bd62-e049a0b2fb72 0xc0030db427 0xc0030db428}] []  [{e2e.test Update apps/v1 2020-05-08 11:25:37 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 
58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-05-08 11:25:42 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 53 54 48 57 53 102 54 52 45 54 49 98 100 45 52 54 54 48 45 98 100 54 50 45 101 48 52 57 97 48 98 50 102 98 55 50 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] 
[] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0030db4d8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
May  8 11:25:42.223: INFO: Pod "test-cleanup-controller-bphfj" is available:
&Pod{ObjectMeta:{test-cleanup-controller-bphfj test-cleanup-controller- deployment-3685 /api/v1/namespaces/deployment-3685/pods/test-cleanup-controller-bphfj ce812239-6391-47c6-865b-8ba1d59ac1e2 2571447 0 2020-05-08 11:25:37 +0000 UTC   map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller c97ebbd7-c9f1-4762-9aa5-0a5c0adbcbff 0xc0030dba77 0xc0030dba78}] []  [{kube-controller-manager Update v1 2020-05-08 11:25:37 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 99 57 55 101 98 98 100 55 45 99 57 102 49 45 52 55 54 50 45 57 97 97 53 45 48 97 53 99 48 97 100 98 99 98 102 102 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 
110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-08 11:25:40 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 
123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 50 46 50 49 48 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nfv9j,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nfv9j,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nfv9j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccoun
tName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 11:25:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 11:25:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 11:25:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 11:25:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:10.244.2.210,StartTime:2020-05-08 11:25:37 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-08 11:25:39 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://0eb17e194ef5f77a3c806fb4eeda44245a7a447bc75366fb6fffd6992b8000a3,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.210,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  8 11:25:42.223: INFO: Pod "test-cleanup-deployment-b4867b47f-hprd2" is not available:
&Pod{ObjectMeta:{test-cleanup-deployment-b4867b47f-hprd2 test-cleanup-deployment-b4867b47f- deployment-3685 /api/v1/namespaces/deployment-3685/pods/test-cleanup-deployment-b4867b47f-hprd2 a9412709-657e-40a1-8a0f-9d656e33cd2c 2571460 0 2020-05-08 11:25:42 +0000 UTC   map[name:cleanup-pod pod-template-hash:b4867b47f] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-b4867b47f 45d7bc80-d6db-410f-8116-dee123d05464 0xc0030dbc30 0xc0030dbc31}] []  [{kube-controller-manager Update v1 2020-05-08 11:25:42 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 52 53 100 55 98 99 56 48 45 100 54 100 98 45 52 49 48 102 45 56 49 49 54 45 100 101 101 49 50 51 100 48 53 52 54 52 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 
117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nfv9j,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nfv9j,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nfv9j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,Securi
tyContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 11:25:42 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:25:42.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-3685" for this suite.

• [SLOW TEST:5.400 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":275,"completed":131,"skipped":2169,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:25:42.368: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0666 on node default medium
May  8 11:25:42.500: INFO: Waiting up to 5m0s for pod "pod-9265f1e1-d01a-49b4-a371-15e6e4599fca" in namespace "emptydir-1310" to be "Succeeded or Failed"
May  8 11:25:42.535: INFO: Pod "pod-9265f1e1-d01a-49b4-a371-15e6e4599fca": Phase="Pending", Reason="", readiness=false. Elapsed: 34.852438ms
May  8 11:25:44.798: INFO: Pod "pod-9265f1e1-d01a-49b4-a371-15e6e4599fca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.298519475s
May  8 11:25:46.815: INFO: Pod "pod-9265f1e1-d01a-49b4-a371-15e6e4599fca": Phase="Pending", Reason="", readiness=false. Elapsed: 4.315098648s
May  8 11:25:48.819: INFO: Pod "pod-9265f1e1-d01a-49b4-a371-15e6e4599fca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.319126947s
STEP: Saw pod success
May  8 11:25:48.819: INFO: Pod "pod-9265f1e1-d01a-49b4-a371-15e6e4599fca" satisfied condition "Succeeded or Failed"
May  8 11:25:48.822: INFO: Trying to get logs from node kali-worker2 pod pod-9265f1e1-d01a-49b4-a371-15e6e4599fca container test-container: 
STEP: delete the pod
May  8 11:25:48.900: INFO: Waiting for pod pod-9265f1e1-d01a-49b4-a371-15e6e4599fca to disappear
May  8 11:25:48.915: INFO: Pod pod-9265f1e1-d01a-49b4-a371-15e6e4599fca no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:25:48.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1310" for this suite.

• [SLOW TEST:6.579 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":132,"skipped":2195,"failed":0}
S
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:25:48.947: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating replication controller my-hostname-basic-e72afbd7-0197-481c-a156-3163b1749a99
May  8 11:25:49.059: INFO: Pod name my-hostname-basic-e72afbd7-0197-481c-a156-3163b1749a99: Found 0 pods out of 1
May  8 11:25:54.070: INFO: Pod name my-hostname-basic-e72afbd7-0197-481c-a156-3163b1749a99: Found 1 pods out of 1
May  8 11:25:54.070: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-e72afbd7-0197-481c-a156-3163b1749a99" are running
May  8 11:25:54.072: INFO: Pod "my-hostname-basic-e72afbd7-0197-481c-a156-3163b1749a99-blkl4" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-08 11:25:49 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-08 11:25:52 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-08 11:25:52 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-08 11:25:49 +0000 UTC Reason: Message:}])
May  8 11:25:54.072: INFO: Trying to dial the pod
May  8 11:25:59.084: INFO: Controller my-hostname-basic-e72afbd7-0197-481c-a156-3163b1749a99: Got expected result from replica 1 [my-hostname-basic-e72afbd7-0197-481c-a156-3163b1749a99-blkl4]: "my-hostname-basic-e72afbd7-0197-481c-a156-3163b1749a99-blkl4", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:25:59.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-8811" for this suite.

• [SLOW TEST:10.147 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","total":275,"completed":133,"skipped":2196,"failed":0}
SS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:25:59.095: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:26:03.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-1279" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":275,"completed":134,"skipped":2198,"failed":0}
SSSSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:26:03.352: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating pod
May  8 11:26:07.536: INFO: Pod pod-hostip-7b9fd4f0-1da7-4cbb-b0ec-2cc31b698917 has hostIP: 172.17.0.18
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:26:07.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4682" for this suite.
•{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":275,"completed":135,"skipped":2206,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should patch a Namespace [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:26:07.543: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should patch a Namespace [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a Namespace
STEP: patching the Namespace
STEP: get the Namespace and ensuring it has the label
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:26:07.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-880" for this suite.
STEP: Destroying namespace "nspatchtest-a3c6411b-2670-47cc-97e4-231ad2f0fe86-1427" for this suite.
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":275,"completed":136,"skipped":2223,"failed":0}
S
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:26:07.763: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May  8 11:26:08.145: INFO: The status of Pod test-webserver-c1de7e57-ab07-425e-8ffa-6e68f2c8a969 is Pending, waiting for it to be Running (with Ready = true)
May  8 11:26:10.150: INFO: The status of Pod test-webserver-c1de7e57-ab07-425e-8ffa-6e68f2c8a969 is Pending, waiting for it to be Running (with Ready = true)
May  8 11:26:12.150: INFO: The status of Pod test-webserver-c1de7e57-ab07-425e-8ffa-6e68f2c8a969 is Running (Ready = false)
May  8 11:26:14.150: INFO: The status of Pod test-webserver-c1de7e57-ab07-425e-8ffa-6e68f2c8a969 is Running (Ready = false)
May  8 11:26:16.150: INFO: The status of Pod test-webserver-c1de7e57-ab07-425e-8ffa-6e68f2c8a969 is Running (Ready = false)
May  8 11:26:18.150: INFO: The status of Pod test-webserver-c1de7e57-ab07-425e-8ffa-6e68f2c8a969 is Running (Ready = false)
May  8 11:26:20.150: INFO: The status of Pod test-webserver-c1de7e57-ab07-425e-8ffa-6e68f2c8a969 is Running (Ready = false)
May  8 11:26:22.150: INFO: The status of Pod test-webserver-c1de7e57-ab07-425e-8ffa-6e68f2c8a969 is Running (Ready = false)
May  8 11:26:24.150: INFO: The status of Pod test-webserver-c1de7e57-ab07-425e-8ffa-6e68f2c8a969 is Running (Ready = false)
May  8 11:26:26.150: INFO: The status of Pod test-webserver-c1de7e57-ab07-425e-8ffa-6e68f2c8a969 is Running (Ready = false)
May  8 11:26:28.150: INFO: The status of Pod test-webserver-c1de7e57-ab07-425e-8ffa-6e68f2c8a969 is Running (Ready = false)
May  8 11:26:30.150: INFO: The status of Pod test-webserver-c1de7e57-ab07-425e-8ffa-6e68f2c8a969 is Running (Ready = true)
May  8 11:26:30.153: INFO: Container started at 2020-05-08 11:26:10 +0000 UTC, pod became ready at 2020-05-08 11:26:29 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:26:30.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8361" for this suite.

• [SLOW TEST:22.398 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":275,"completed":137,"skipped":2224,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:26:30.162: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0644 on node default medium
May  8 11:26:30.254: INFO: Waiting up to 5m0s for pod "pod-ffd47fc0-b2a8-4076-919e-03edff44aa93" in namespace "emptydir-4587" to be "Succeeded or Failed"
May  8 11:26:30.272: INFO: Pod "pod-ffd47fc0-b2a8-4076-919e-03edff44aa93": Phase="Pending", Reason="", readiness=false. Elapsed: 17.889762ms
May  8 11:26:32.276: INFO: Pod "pod-ffd47fc0-b2a8-4076-919e-03edff44aa93": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022027259s
May  8 11:26:34.279: INFO: Pod "pod-ffd47fc0-b2a8-4076-919e-03edff44aa93": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025057284s
May  8 11:26:36.294: INFO: Pod "pod-ffd47fc0-b2a8-4076-919e-03edff44aa93": Phase="Running", Reason="", readiness=true. Elapsed: 6.040269863s
May  8 11:26:38.299: INFO: Pod "pod-ffd47fc0-b2a8-4076-919e-03edff44aa93": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.044599778s
STEP: Saw pod success
May  8 11:26:38.299: INFO: Pod "pod-ffd47fc0-b2a8-4076-919e-03edff44aa93" satisfied condition "Succeeded or Failed"
May  8 11:26:38.302: INFO: Trying to get logs from node kali-worker pod pod-ffd47fc0-b2a8-4076-919e-03edff44aa93 container test-container: 
STEP: delete the pod
May  8 11:26:38.347: INFO: Waiting for pod pod-ffd47fc0-b2a8-4076-919e-03edff44aa93 to disappear
May  8 11:26:38.354: INFO: Pod pod-ffd47fc0-b2a8-4076-919e-03edff44aa93 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:26:38.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4587" for this suite.

• [SLOW TEST:8.199 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":138,"skipped":2257,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:26:38.362: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May  8 11:26:38.443: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0badb741-3fd6-4fda-8202-7401ceb0adff" in namespace "projected-2694" to be "Succeeded or Failed"
May  8 11:26:38.446: INFO: Pod "downwardapi-volume-0badb741-3fd6-4fda-8202-7401ceb0adff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.999418ms
May  8 11:26:40.552: INFO: Pod "downwardapi-volume-0badb741-3fd6-4fda-8202-7401ceb0adff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.108854584s
May  8 11:26:42.556: INFO: Pod "downwardapi-volume-0badb741-3fd6-4fda-8202-7401ceb0adff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.112709561s
STEP: Saw pod success
May  8 11:26:42.556: INFO: Pod "downwardapi-volume-0badb741-3fd6-4fda-8202-7401ceb0adff" satisfied condition "Succeeded or Failed"
May  8 11:26:42.559: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-0badb741-3fd6-4fda-8202-7401ceb0adff container client-container: 
STEP: delete the pod
May  8 11:26:42.664: INFO: Waiting for pod downwardapi-volume-0badb741-3fd6-4fda-8202-7401ceb0adff to disappear
May  8 11:26:42.667: INFO: Pod downwardapi-volume-0badb741-3fd6-4fda-8202-7401ceb0adff no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:26:42.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2694" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":139,"skipped":2289,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:26:42.678: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:26:46.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-4918" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":275,"completed":140,"skipped":2354,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:26:46.809: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
May  8 11:26:47.194: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
May  8 11:26:49.206: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724534007, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724534007, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724534007, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724534007, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-65c6cd5fdf\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May  8 11:26:52.240: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May  8 11:26:52.319: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: v2 custom resource should be converted
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:26:53.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-1450" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137

• [SLOW TEST:6.919 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":275,"completed":141,"skipped":2365,"failed":0}
[sig-cli] Kubectl client Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:26:53.728: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1206
STEP: creating the pod
May  8 11:26:53.862: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3433'
May  8 11:26:54.470: INFO: stderr: ""
May  8 11:26:54.470: INFO: stdout: "pod/pause created\n"
May  8 11:26:54.470: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
May  8 11:26:54.470: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-3433" to be "running and ready"
May  8 11:26:54.485: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 14.849644ms
May  8 11:26:56.490: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01914995s
May  8 11:26:58.494: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.023426795s
May  8 11:26:58.494: INFO: Pod "pause" satisfied condition "running and ready"
May  8 11:26:58.494: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: adding the label testing-label with value testing-label-value to a pod
May  8 11:26:58.494: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-3433'
May  8 11:26:58.610: INFO: stderr: ""
May  8 11:26:58.610: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
May  8 11:26:58.610: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-3433'
May  8 11:26:58.696: INFO: stderr: ""
May  8 11:26:58.696: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          4s    testing-label-value\n"
STEP: removing the label testing-label of a pod
May  8 11:26:58.696: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-3433'
May  8 11:26:58.842: INFO: stderr: ""
May  8 11:26:58.842: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
May  8 11:26:58.842: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-3433'
May  8 11:26:58.933: INFO: stderr: ""
May  8 11:26:58.933: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          4s    \n"
[AfterEach] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1213
STEP: using delete to clean up resources
May  8 11:26:58.933: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3433'
May  8 11:26:59.054: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May  8 11:26:59.054: INFO: stdout: "pod \"pause\" force deleted\n"
May  8 11:26:59.054: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-3433'
May  8 11:26:59.156: INFO: stderr: "No resources found in kubectl-3433 namespace.\n"
May  8 11:26:59.156: INFO: stdout: ""
May  8 11:26:59.156: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-3433 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
May  8 11:26:59.342: INFO: stderr: ""
May  8 11:26:59.342: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:26:59.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3433" for this suite.

• [SLOW TEST:5.623 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1203
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource  [Conformance]","total":275,"completed":142,"skipped":2365,"failed":0}
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:26:59.351: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0666 on tmpfs
May  8 11:26:59.572: INFO: Waiting up to 5m0s for pod "pod-7f7f429e-e1d4-4071-8560-2a864f0518e7" in namespace "emptydir-8223" to be "Succeeded or Failed"
May  8 11:26:59.704: INFO: Pod "pod-7f7f429e-e1d4-4071-8560-2a864f0518e7": Phase="Pending", Reason="", readiness=false. Elapsed: 131.948583ms
May  8 11:27:01.708: INFO: Pod "pod-7f7f429e-e1d4-4071-8560-2a864f0518e7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.13560092s
May  8 11:27:03.858: INFO: Pod "pod-7f7f429e-e1d4-4071-8560-2a864f0518e7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.285170092s
STEP: Saw pod success
May  8 11:27:03.858: INFO: Pod "pod-7f7f429e-e1d4-4071-8560-2a864f0518e7" satisfied condition "Succeeded or Failed"
May  8 11:27:04.049: INFO: Trying to get logs from node kali-worker pod pod-7f7f429e-e1d4-4071-8560-2a864f0518e7 container test-container: 
STEP: delete the pod
May  8 11:27:04.117: INFO: Waiting for pod pod-7f7f429e-e1d4-4071-8560-2a864f0518e7 to disappear
May  8 11:27:04.134: INFO: Pod pod-7f7f429e-e1d4-4071-8560-2a864f0518e7 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:27:04.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8223" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":143,"skipped":2367,"failed":0}
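The EmptyDir test above creates a pod that writes into a tmpfs-backed emptyDir volume and asserts the 0666 file mode as a non-root user. The permission check at the heart of that test can be illustrated locally; this is a hedged sketch on an ordinary temp file, not the e2e framework's actual mount-tester code:

```python
import os
import stat
import tempfile

# Sketch (assumption-level illustration, not the e2e test's code): create a
# file, force mode 0666 as the test's volume settings do, and assert it.
path = os.path.join(tempfile.mkdtemp(), "data")
with open(path, "w") as f:
    f.write("mount-tester output")

os.chmod(path, 0o666)  # analogous to the 0666 mode the test configures

mode = stat.S_IMODE(os.stat(path).st_mode)
assert mode == 0o666
print(oct(mode))  # 0o666
```

In the real test the write and check happen inside the pod's test container against the tmpfs mount, which is why the log only reports the pod reaching "Succeeded".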
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:27:04.142: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
May  8 11:27:09.884: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:27:10.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-9758" for this suite.

• [SLOW TEST:6.766 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":275,"completed":144,"skipped":2393,"failed":0}
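The adopt/release steps in the ReplicaSet test above hinge on label-selector matching: the controller adopts an orphan pod whose labels satisfy its selector, and releases it once a relabel breaks the match. A minimal sketch of that matching logic (illustrative only, not controller code; the selector key mirrors the test's `name` label):

```python
# Sketch: equality-based selector matching, the mechanism behind
# ReplicaSet adoption and release exercised by the test steps above.
def selector_matches(selector: dict, labels: dict) -> bool:
    """True when every selector key/value pair appears in the pod's labels."""
    return all(labels.get(k) == v for k, v in selector.items())

selector = {"name": "pod-adoption-release"}  # label used by the test pod

print(selector_matches(selector, {"name": "pod-adoption-release"}))  # True: adopted
print(selector_matches(selector, {"name": "relabeled"}))             # False: released
```

The real controller additionally manages `ownerReferences` on the pod when adopting or releasing; the selector match shown here is what triggers those updates.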
SSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:27:10.908: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating the pod
May  8 11:27:15.621: INFO: Successfully updated pod "labelsupdate84684787-7f55-4d0e-a5a6-4d7f2a7b0333"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:27:17.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9415" for this suite.

• [SLOW TEST:6.751 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":275,"completed":145,"skipped":2399,"failed":0}
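The projected downward API test above waits for the kubelet to rewrite a volume file after pod labels change. Downward API label files are serialized one `key="value"` line per label; the following is a hedged sketch of that rendering (an illustration of the file format, not kubelet code):

```python
# Sketch: render pod labels in the downward API volume file format,
# one key="value" line per label, sorted for deterministic output.
def render_labels(labels: dict) -> str:
    return "\n".join(f'{k}="{v}"' for k, v in sorted(labels.items()))

print(render_labels({"app": "web", "tier": "frontend"}))
```

The "Successfully updated pod" log line above marks the label patch; the test then polls the projected file until its contents reflect the new labels.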
S
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:27:17.660: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May  8 11:27:17.747: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating replication controller svc-latency-rc in namespace svc-latency-9231
I0508 11:27:17.766868       7 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-9231, replica count: 1
I0508 11:27:18.817435       7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0508 11:27:19.817676       7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0508 11:27:20.817931       7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0508 11:27:21.818208       7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
May  8 11:27:22.126: INFO: Created: latency-svc-dszts
May  8 11:27:22.218: INFO: Got endpoints: latency-svc-dszts [299.9425ms]
May  8 11:27:22.474: INFO: Created: latency-svc-9m2vw
May  8 11:27:22.571: INFO: Got endpoints: latency-svc-9m2vw [353.451902ms]
May  8 11:27:22.601: INFO: Created: latency-svc-8lndk
May  8 11:27:22.618: INFO: Got endpoints: latency-svc-8lndk [399.663963ms]
May  8 11:27:22.635: INFO: Created: latency-svc-99xg6
May  8 11:27:22.720: INFO: Got endpoints: latency-svc-99xg6 [501.752251ms]
May  8 11:27:22.751: INFO: Created: latency-svc-lb78k
May  8 11:27:22.766: INFO: Got endpoints: latency-svc-lb78k [544.898281ms]
May  8 11:27:22.858: INFO: Created: latency-svc-t4zx4
May  8 11:27:22.867: INFO: Got endpoints: latency-svc-t4zx4 [645.438332ms]
May  8 11:27:23.123: INFO: Created: latency-svc-6ml85
May  8 11:27:23.303: INFO: Got endpoints: latency-svc-6ml85 [1.081837287s]
May  8 11:27:23.356: INFO: Created: latency-svc-fxc7q
May  8 11:27:23.445: INFO: Got endpoints: latency-svc-fxc7q [1.22357023s]
May  8 11:27:23.476: INFO: Created: latency-svc-pvkxq
May  8 11:27:23.530: INFO: Got endpoints: latency-svc-pvkxq [1.308948569s]
May  8 11:27:23.632: INFO: Created: latency-svc-gcltr
May  8 11:27:23.661: INFO: Got endpoints: latency-svc-gcltr [1.43995716s]
May  8 11:27:23.740: INFO: Created: latency-svc-tvmh5
May  8 11:27:23.753: INFO: Got endpoints: latency-svc-tvmh5 [1.535010085s]
May  8 11:27:23.782: INFO: Created: latency-svc-qsrgt
May  8 11:27:23.799: INFO: Got endpoints: latency-svc-qsrgt [1.578370454s]
May  8 11:27:23.902: INFO: Created: latency-svc-ctwnc
May  8 11:27:23.914: INFO: Got endpoints: latency-svc-ctwnc [1.689179244s]
May  8 11:27:23.938: INFO: Created: latency-svc-v2648
May  8 11:27:23.980: INFO: Got endpoints: latency-svc-v2648 [1.759210977s]
May  8 11:27:24.077: INFO: Created: latency-svc-2gssr
May  8 11:27:24.100: INFO: Got endpoints: latency-svc-2gssr [1.878894437s]
May  8 11:27:24.142: INFO: Created: latency-svc-xlfxm
May  8 11:27:24.295: INFO: Got endpoints: latency-svc-xlfxm [2.074652332s]
May  8 11:27:24.504: INFO: Created: latency-svc-658lm
May  8 11:27:24.521: INFO: Got endpoints: latency-svc-658lm [1.949674584s]
May  8 11:27:24.557: INFO: Created: latency-svc-x9mz5
May  8 11:27:24.574: INFO: Got endpoints: latency-svc-x9mz5 [1.956230965s]
May  8 11:27:24.676: INFO: Created: latency-svc-gkgll
May  8 11:27:24.688: INFO: Got endpoints: latency-svc-gkgll [1.968549311s]
May  8 11:27:24.816: INFO: Created: latency-svc-q7r5l
May  8 11:27:24.826: INFO: Got endpoints: latency-svc-q7r5l [2.059921189s]
May  8 11:27:24.850: INFO: Created: latency-svc-vkhtt
May  8 11:27:24.863: INFO: Got endpoints: latency-svc-vkhtt [1.995750829s]
May  8 11:27:24.904: INFO: Created: latency-svc-l5lnh
May  8 11:27:24.947: INFO: Got endpoints: latency-svc-l5lnh [1.64415724s]
May  8 11:27:24.964: INFO: Created: latency-svc-wf5kg
May  8 11:27:24.985: INFO: Got endpoints: latency-svc-wf5kg [1.540410376s]
May  8 11:27:25.109: INFO: Created: latency-svc-wnbl9
May  8 11:27:25.113: INFO: Got endpoints: latency-svc-wnbl9 [1.582136631s]
May  8 11:27:25.156: INFO: Created: latency-svc-wllxb
May  8 11:27:25.204: INFO: Got endpoints: latency-svc-wllxb [1.54267555s]
May  8 11:27:25.265: INFO: Created: latency-svc-47v26
May  8 11:27:25.282: INFO: Got endpoints: latency-svc-47v26 [1.528524233s]
May  8 11:27:25.330: INFO: Created: latency-svc-lnrtb
May  8 11:27:25.427: INFO: Got endpoints: latency-svc-lnrtb [1.628036397s]
May  8 11:27:25.649: INFO: Created: latency-svc-zr5kc
May  8 11:27:25.835: INFO: Got endpoints: latency-svc-zr5kc [1.921834535s]
May  8 11:27:25.930: INFO: Created: latency-svc-6p6qz
May  8 11:27:25.935: INFO: Got endpoints: latency-svc-6p6qz [1.954430485s]
May  8 11:27:25.980: INFO: Created: latency-svc-nzfwm
May  8 11:27:25.992: INFO: Got endpoints: latency-svc-nzfwm [1.891694302s]
May  8 11:27:26.015: INFO: Created: latency-svc-ndmk2
May  8 11:27:26.097: INFO: Got endpoints: latency-svc-ndmk2 [1.801579077s]
May  8 11:27:26.351: INFO: Created: latency-svc-w8rv9
May  8 11:27:26.450: INFO: Got endpoints: latency-svc-w8rv9 [1.92911285s]
May  8 11:27:26.464: INFO: Created: latency-svc-s7jdm
May  8 11:27:26.478: INFO: Got endpoints: latency-svc-s7jdm [1.903869156s]
May  8 11:27:26.499: INFO: Created: latency-svc-nfwrd
May  8 11:27:26.520: INFO: Got endpoints: latency-svc-nfwrd [1.831927457s]
May  8 11:27:26.620: INFO: Created: latency-svc-rcpft
May  8 11:27:26.653: INFO: Got endpoints: latency-svc-rcpft [1.826577269s]
May  8 11:27:26.680: INFO: Created: latency-svc-lfp5d
May  8 11:27:26.732: INFO: Got endpoints: latency-svc-lfp5d [1.868988656s]
May  8 11:27:26.759: INFO: Created: latency-svc-6bdnw
May  8 11:27:26.773: INFO: Got endpoints: latency-svc-6bdnw [1.825705338s]
May  8 11:27:26.824: INFO: Created: latency-svc-s786t
May  8 11:27:26.864: INFO: Got endpoints: latency-svc-s786t [1.878041707s]
May  8 11:27:26.914: INFO: Created: latency-svc-ddzbh
May  8 11:27:26.924: INFO: Got endpoints: latency-svc-ddzbh [1.811189127s]
May  8 11:27:27.001: INFO: Created: latency-svc-tkx4f
May  8 11:27:27.020: INFO: Got endpoints: latency-svc-tkx4f [1.815632991s]
May  8 11:27:27.065: INFO: Created: latency-svc-r5qbl
May  8 11:27:27.146: INFO: Got endpoints: latency-svc-r5qbl [1.863700937s]
May  8 11:27:27.172: INFO: Created: latency-svc-s2qwd
May  8 11:27:27.188: INFO: Got endpoints: latency-svc-s2qwd [1.760622156s]
May  8 11:27:27.223: INFO: Created: latency-svc-rdxbt
May  8 11:27:27.237: INFO: Got endpoints: latency-svc-rdxbt [1.401381422s]
May  8 11:27:27.283: INFO: Created: latency-svc-kjkl2
May  8 11:27:27.286: INFO: Got endpoints: latency-svc-kjkl2 [1.351609518s]
May  8 11:27:27.346: INFO: Created: latency-svc-whzmw
May  8 11:27:27.375: INFO: Got endpoints: latency-svc-whzmw [1.3830936s]
May  8 11:27:27.448: INFO: Created: latency-svc-dmldl
May  8 11:27:27.472: INFO: Got endpoints: latency-svc-dmldl [1.374602039s]
May  8 11:27:27.564: INFO: Created: latency-svc-tmm6s
May  8 11:27:27.567: INFO: Got endpoints: latency-svc-tmm6s [1.116446911s]
May  8 11:27:27.616: INFO: Created: latency-svc-9wh8v
May  8 11:27:27.628: INFO: Got endpoints: latency-svc-9wh8v [1.149982999s]
May  8 11:27:27.646: INFO: Created: latency-svc-7pspx
May  8 11:27:27.708: INFO: Got endpoints: latency-svc-7pspx [1.18783883s]
May  8 11:27:27.725: INFO: Created: latency-svc-4kz5k
May  8 11:27:27.748: INFO: Got endpoints: latency-svc-4kz5k [1.095265287s]
May  8 11:27:27.870: INFO: Created: latency-svc-mxq9l
May  8 11:27:27.875: INFO: Got endpoints: latency-svc-mxq9l [1.143353114s]
May  8 11:27:27.910: INFO: Created: latency-svc-2dbfx
May  8 11:27:27.923: INFO: Got endpoints: latency-svc-2dbfx [1.149957201s]
May  8 11:27:27.946: INFO: Created: latency-svc-bmm5k
May  8 11:27:27.959: INFO: Got endpoints: latency-svc-bmm5k [1.094996513s]
May  8 11:27:28.013: INFO: Created: latency-svc-gvtfr
May  8 11:27:28.042: INFO: Got endpoints: latency-svc-gvtfr [1.118296221s]
May  8 11:27:28.042: INFO: Created: latency-svc-4ll92
May  8 11:27:28.066: INFO: Got endpoints: latency-svc-4ll92 [1.04648879s]
May  8 11:27:28.103: INFO: Created: latency-svc-96flr
May  8 11:27:28.175: INFO: Got endpoints: latency-svc-96flr [1.029285772s]
May  8 11:27:28.192: INFO: Created: latency-svc-l49cl
May  8 11:27:28.207: INFO: Got endpoints: latency-svc-l49cl [1.018340855s]
May  8 11:27:28.246: INFO: Created: latency-svc-zk6bz
May  8 11:27:28.254: INFO: Got endpoints: latency-svc-zk6bz [1.017454373s]
May  8 11:27:28.319: INFO: Created: latency-svc-l2plr
May  8 11:27:28.354: INFO: Got endpoints: latency-svc-l2plr [1.068146748s]
May  8 11:27:28.355: INFO: Created: latency-svc-9qhsf
May  8 11:27:28.384: INFO: Got endpoints: latency-svc-9qhsf [1.008825001s]
May  8 11:27:28.414: INFO: Created: latency-svc-lfzsv
May  8 11:27:28.451: INFO: Got endpoints: latency-svc-lfzsv [978.650193ms]
May  8 11:27:28.468: INFO: Created: latency-svc-f74ww
May  8 11:27:28.498: INFO: Got endpoints: latency-svc-f74ww [931.416047ms]
May  8 11:27:28.594: INFO: Created: latency-svc-cllzp
May  8 11:27:28.610: INFO: Got endpoints: latency-svc-cllzp [982.011216ms]
May  8 11:27:28.685: INFO: Created: latency-svc-c64zd
May  8 11:27:28.774: INFO: Got endpoints: latency-svc-c64zd [1.06568114s]
May  8 11:27:28.799: INFO: Created: latency-svc-p6xjb
May  8 11:27:28.845: INFO: Got endpoints: latency-svc-p6xjb [1.096516141s]
May  8 11:27:28.864: INFO: Created: latency-svc-frdbx
May  8 11:27:28.912: INFO: Got endpoints: latency-svc-frdbx [1.036523643s]
May  8 11:27:28.943: INFO: Created: latency-svc-xsnf7
May  8 11:27:28.977: INFO: Got endpoints: latency-svc-xsnf7 [1.054030976s]
May  8 11:27:29.056: INFO: Created: latency-svc-ghftz
May  8 11:27:29.075: INFO: Got endpoints: latency-svc-ghftz [1.11587379s]
May  8 11:27:29.128: INFO: Created: latency-svc-wjh2k
May  8 11:27:29.141: INFO: Got endpoints: latency-svc-wjh2k [1.0991046s]
May  8 11:27:29.243: INFO: Created: latency-svc-rwj8p
May  8 11:27:29.267: INFO: Got endpoints: latency-svc-rwj8p [1.200305522s]
May  8 11:27:29.308: INFO: Created: latency-svc-45vf9
May  8 11:27:29.328: INFO: Got endpoints: latency-svc-45vf9 [1.152675096s]
May  8 11:27:29.427: INFO: Created: latency-svc-zlzqv
May  8 11:27:29.448: INFO: Got endpoints: latency-svc-zlzqv [1.241773217s]
May  8 11:27:29.482: INFO: Created: latency-svc-shz6d
May  8 11:27:29.588: INFO: Got endpoints: latency-svc-shz6d [1.333585152s]
May  8 11:27:29.590: INFO: Created: latency-svc-7jjn5
May  8 11:27:29.611: INFO: Got endpoints: latency-svc-7jjn5 [1.256227033s]
May  8 11:27:29.638: INFO: Created: latency-svc-2kcgn
May  8 11:27:29.677: INFO: Got endpoints: latency-svc-2kcgn [1.293610524s]
May  8 11:27:29.753: INFO: Created: latency-svc-r8xrb
May  8 11:27:29.767: INFO: Got endpoints: latency-svc-r8xrb [1.316120435s]
May  8 11:27:29.801: INFO: Created: latency-svc-kgb2g
May  8 11:27:29.840: INFO: Got endpoints: latency-svc-kgb2g [1.341132706s]
May  8 11:27:29.906: INFO: Created: latency-svc-bxsbz
May  8 11:27:29.948: INFO: Got endpoints: latency-svc-bxsbz [1.337917996s]
May  8 11:27:29.981: INFO: Created: latency-svc-g74fx
May  8 11:27:29.996: INFO: Got endpoints: latency-svc-g74fx [1.221593123s]
May  8 11:27:30.055: INFO: Created: latency-svc-x4fz9
May  8 11:27:30.077: INFO: Got endpoints: latency-svc-x4fz9 [1.232099411s]
May  8 11:27:30.107: INFO: Created: latency-svc-cq5hs
May  8 11:27:30.116: INFO: Got endpoints: latency-svc-cq5hs [1.204158407s]
May  8 11:27:30.150: INFO: Created: latency-svc-8kncs
May  8 11:27:30.193: INFO: Got endpoints: latency-svc-8kncs [1.216110245s]
May  8 11:27:30.208: INFO: Created: latency-svc-75qkp
May  8 11:27:30.251: INFO: Got endpoints: latency-svc-75qkp [1.176418036s]
May  8 11:27:30.281: INFO: Created: latency-svc-q2c7x
May  8 11:27:30.332: INFO: Got endpoints: latency-svc-q2c7x [1.19008541s]
May  8 11:27:30.348: INFO: Created: latency-svc-zkkwp
May  8 11:27:30.364: INFO: Got endpoints: latency-svc-zkkwp [1.097328645s]
May  8 11:27:30.388: INFO: Created: latency-svc-fsc8q
May  8 11:27:30.413: INFO: Got endpoints: latency-svc-fsc8q [1.085131899s]
May  8 11:27:30.468: INFO: Created: latency-svc-fg9p9
May  8 11:27:30.472: INFO: Got endpoints: latency-svc-fg9p9 [1.023170822s]
May  8 11:27:30.545: INFO: Created: latency-svc-mkmmk
May  8 11:27:30.630: INFO: Got endpoints: latency-svc-mkmmk [1.042208616s]
May  8 11:27:30.633: INFO: Created: latency-svc-xvg6k
May  8 11:27:30.641: INFO: Got endpoints: latency-svc-xvg6k [1.030214491s]
May  8 11:27:30.689: INFO: Created: latency-svc-bb4qn
May  8 11:27:30.714: INFO: Got endpoints: latency-svc-bb4qn [1.03646532s]
May  8 11:27:30.768: INFO: Created: latency-svc-8nrvx
May  8 11:27:30.811: INFO: Got endpoints: latency-svc-8nrvx [1.043952787s]
May  8 11:27:30.813: INFO: Created: latency-svc-j4nz5
May  8 11:27:30.825: INFO: Got endpoints: latency-svc-j4nz5 [985.255161ms]
May  8 11:27:30.863: INFO: Created: latency-svc-tbd5k
May  8 11:27:30.936: INFO: Got endpoints: latency-svc-tbd5k [987.781098ms]
May  8 11:27:30.978: INFO: Created: latency-svc-pzc84
May  8 11:27:30.994: INFO: Got endpoints: latency-svc-pzc84 [998.017515ms]
May  8 11:27:31.074: INFO: Created: latency-svc-2q49s
May  8 11:27:31.091: INFO: Got endpoints: latency-svc-2q49s [1.013576917s]
May  8 11:27:31.139: INFO: Created: latency-svc-wslf9
May  8 11:27:31.162: INFO: Got endpoints: latency-svc-wslf9 [1.045978143s]
May  8 11:27:31.204: INFO: Created: latency-svc-fr8zf
May  8 11:27:31.209: INFO: Got endpoints: latency-svc-fr8zf [1.015894677s]
May  8 11:27:31.234: INFO: Created: latency-svc-vfptq
May  8 11:27:31.246: INFO: Got endpoints: latency-svc-vfptq [995.074889ms]
May  8 11:27:31.279: INFO: Created: latency-svc-p6pl5
May  8 11:27:31.349: INFO: Got endpoints: latency-svc-p6pl5 [1.017631824s]
May  8 11:27:31.380: INFO: Created: latency-svc-j7fzr
May  8 11:27:31.404: INFO: Got endpoints: latency-svc-j7fzr [1.039555546s]
May  8 11:27:31.486: INFO: Created: latency-svc-vnlsc
May  8 11:27:31.530: INFO: Got endpoints: latency-svc-vnlsc [1.11705372s]
May  8 11:27:31.530: INFO: Created: latency-svc-rlhnz
May  8 11:27:31.672: INFO: Got endpoints: latency-svc-rlhnz [1.200465166s]
May  8 11:27:31.758: INFO: Created: latency-svc-rb7mm
May  8 11:27:31.911: INFO: Got endpoints: latency-svc-rb7mm [1.281121144s]
May  8 11:27:31.945: INFO: Created: latency-svc-skktq
May  8 11:27:32.103: INFO: Got endpoints: latency-svc-skktq [1.461677178s]
May  8 11:27:32.106: INFO: Created: latency-svc-7n76q
May  8 11:27:32.130: INFO: Got endpoints: latency-svc-7n76q [1.416124247s]
May  8 11:27:32.412: INFO: Created: latency-svc-9mbkm
May  8 11:27:32.431: INFO: Got endpoints: latency-svc-9mbkm [1.620096155s]
May  8 11:27:32.473: INFO: Created: latency-svc-mvf7v
May  8 11:27:32.666: INFO: Got endpoints: latency-svc-mvf7v [1.841060066s]
May  8 11:27:32.684: INFO: Created: latency-svc-jhqvr
May  8 11:27:32.688: INFO: Got endpoints: latency-svc-jhqvr [1.752615319s]
May  8 11:27:33.393: INFO: Created: latency-svc-q8vn9
May  8 11:27:33.614: INFO: Got endpoints: latency-svc-q8vn9 [2.620125794s]
May  8 11:27:33.704: INFO: Created: latency-svc-mxww8
May  8 11:27:33.884: INFO: Got endpoints: latency-svc-mxww8 [2.792963945s]
May  8 11:27:34.076: INFO: Created: latency-svc-rgc6b
May  8 11:27:34.147: INFO: Got endpoints: latency-svc-rgc6b [2.985375194s]
May  8 11:27:34.217: INFO: Created: latency-svc-h5b6h
May  8 11:27:34.221: INFO: Got endpoints: latency-svc-h5b6h [3.012269431s]
May  8 11:27:34.348: INFO: Created: latency-svc-b2phm
May  8 11:27:34.374: INFO: Got endpoints: latency-svc-b2phm [3.12778995s]
May  8 11:27:34.375: INFO: Created: latency-svc-htdmd
May  8 11:27:34.392: INFO: Got endpoints: latency-svc-htdmd [3.042972509s]
May  8 11:27:34.435: INFO: Created: latency-svc-qss72
May  8 11:27:34.486: INFO: Got endpoints: latency-svc-qss72 [3.08246169s]
May  8 11:27:34.500: INFO: Created: latency-svc-rf52t
May  8 11:27:34.540: INFO: Got endpoints: latency-svc-rf52t [3.009556244s]
May  8 11:27:34.585: INFO: Created: latency-svc-g2h5s
May  8 11:27:34.636: INFO: Got endpoints: latency-svc-g2h5s [2.964144426s]
May  8 11:27:34.664: INFO: Created: latency-svc-zzgql
May  8 11:27:34.678: INFO: Got endpoints: latency-svc-zzgql [2.766555878s]
May  8 11:27:34.711: INFO: Created: latency-svc-rlrxg
May  8 11:27:34.726: INFO: Got endpoints: latency-svc-rlrxg [2.623365234s]
May  8 11:27:34.802: INFO: Created: latency-svc-7dbzc
May  8 11:27:34.816: INFO: Got endpoints: latency-svc-7dbzc [2.685938724s]
May  8 11:27:34.855: INFO: Created: latency-svc-gdszx
May  8 11:27:34.912: INFO: Got endpoints: latency-svc-gdszx [2.480752787s]
May  8 11:27:34.922: INFO: Created: latency-svc-62d8g
May  8 11:27:34.938: INFO: Got endpoints: latency-svc-62d8g [2.271772741s]
May  8 11:27:34.964: INFO: Created: latency-svc-kx6gs
May  8 11:27:34.973: INFO: Got endpoints: latency-svc-kx6gs [2.284498242s]
May  8 11:27:35.085: INFO: Created: latency-svc-9md4c
May  8 11:27:35.089: INFO: Got endpoints: latency-svc-9md4c [1.475161486s]
May  8 11:27:35.235: INFO: Created: latency-svc-jttsr
May  8 11:27:35.239: INFO: Got endpoints: latency-svc-jttsr [1.355015985s]
May  8 11:27:35.269: INFO: Created: latency-svc-dl8hg
May  8 11:27:35.287: INFO: Got endpoints: latency-svc-dl8hg [1.139516769s]
May  8 11:27:35.306: INFO: Created: latency-svc-5lgrd
May  8 11:27:35.316: INFO: Got endpoints: latency-svc-5lgrd [1.095037607s]
May  8 11:27:35.375: INFO: Created: latency-svc-xhlzg
May  8 11:27:35.376: INFO: Got endpoints: latency-svc-xhlzg [1.001724603s]
May  8 11:27:35.395: INFO: Created: latency-svc-bc627
May  8 11:27:35.419: INFO: Got endpoints: latency-svc-bc627 [1.026530825s]
May  8 11:27:35.449: INFO: Created: latency-svc-l754b
May  8 11:27:35.462: INFO: Got endpoints: latency-svc-l754b [975.469765ms]
May  8 11:27:35.523: INFO: Created: latency-svc-5vwfj
May  8 11:27:35.557: INFO: Got endpoints: latency-svc-5vwfj [1.017894635s]
May  8 11:27:35.558: INFO: Created: latency-svc-jn9p8
May  8 11:27:35.581: INFO: Got endpoints: latency-svc-jn9p8 [944.252737ms]
May  8 11:27:35.667: INFO: Created: latency-svc-j6md6
May  8 11:27:35.670: INFO: Got endpoints: latency-svc-j6md6 [991.918751ms]
May  8 11:27:35.743: INFO: Created: latency-svc-8kcwc
May  8 11:27:35.757: INFO: Got endpoints: latency-svc-8kcwc [1.03065536s]
May  8 11:27:35.840: INFO: Created: latency-svc-n5glm
May  8 11:27:35.861: INFO: Got endpoints: latency-svc-n5glm [1.045215442s]
May  8 11:27:35.911: INFO: Created: latency-svc-nxcm6
May  8 11:27:35.977: INFO: Got endpoints: latency-svc-nxcm6 [1.06551444s]
May  8 11:27:36.025: INFO: Created: latency-svc-5vpjd
May  8 11:27:36.059: INFO: Got endpoints: latency-svc-5vpjd [1.120927574s]
May  8 11:27:36.187: INFO: Created: latency-svc-pn68g
May  8 11:27:36.257: INFO: Got endpoints: latency-svc-pn68g [1.283823176s]
May  8 11:27:36.354: INFO: Created: latency-svc-kx8pq
May  8 11:27:36.403: INFO: Got endpoints: latency-svc-kx8pq [1.313953265s]
May  8 11:27:36.405: INFO: Created: latency-svc-th86v
May  8 11:27:36.523: INFO: Got endpoints: latency-svc-th86v [1.284497301s]
May  8 11:27:36.554: INFO: Created: latency-svc-6ktnp
May  8 11:27:36.578: INFO: Got endpoints: latency-svc-6ktnp [1.29088246s]
May  8 11:27:36.620: INFO: Created: latency-svc-9r7gx
May  8 11:27:36.666: INFO: Got endpoints: latency-svc-9r7gx [1.349521201s]
May  8 11:27:36.692: INFO: Created: latency-svc-2dglx
May  8 11:27:36.702: INFO: Got endpoints: latency-svc-2dglx [1.326345025s]
May  8 11:27:36.752: INFO: Created: latency-svc-nm5jf
May  8 11:27:36.798: INFO: Got endpoints: latency-svc-nm5jf [1.378693277s]
May  8 11:27:36.824: INFO: Created: latency-svc-ck525
May  8 11:27:36.841: INFO: Got endpoints: latency-svc-ck525 [1.378838528s]
May  8 11:27:36.860: INFO: Created: latency-svc-dwtkj
May  8 11:27:36.877: INFO: Got endpoints: latency-svc-dwtkj [1.319506934s]
May  8 11:27:36.956: INFO: Created: latency-svc-bmcld
May  8 11:27:36.967: INFO: Got endpoints: latency-svc-bmcld [1.386665349s]
May  8 11:27:36.986: INFO: Created: latency-svc-cvdxl
May  8 11:27:36.998: INFO: Got endpoints: latency-svc-cvdxl [1.327657742s]
May  8 11:27:37.016: INFO: Created: latency-svc-h7sz2
May  8 11:27:37.039: INFO: Got endpoints: latency-svc-h7sz2 [1.282639335s]
May  8 11:27:37.109: INFO: Created: latency-svc-fbbgn
May  8 11:27:37.112: INFO: Got endpoints: latency-svc-fbbgn [1.25080341s]
May  8 11:27:37.338: INFO: Created: latency-svc-8vnfh
May  8 11:27:37.346: INFO: Got endpoints: latency-svc-8vnfh [1.369105888s]
May  8 11:27:37.398: INFO: Created: latency-svc-4lq26
May  8 11:27:37.419: INFO: Got endpoints: latency-svc-4lq26 [1.359958726s]
May  8 11:27:37.493: INFO: Created: latency-svc-hdfjg
May  8 11:27:37.497: INFO: Got endpoints: latency-svc-hdfjg [1.239999804s]
May  8 11:27:37.544: INFO: Created: latency-svc-xb6hx
May  8 11:27:37.556: INFO: Got endpoints: latency-svc-xb6hx [1.153226339s]
May  8 11:27:37.592: INFO: Created: latency-svc-5khkv
May  8 11:27:37.625: INFO: Got endpoints: latency-svc-5khkv [1.101377851s]
May  8 11:27:37.647: INFO: Created: latency-svc-lg7xk
May  8 11:27:37.678: INFO: Got endpoints: latency-svc-lg7xk [1.10002853s]
May  8 11:27:37.784: INFO: Created: latency-svc-nqvfz
May  8 11:27:37.815: INFO: Got endpoints: latency-svc-nqvfz [1.148510088s]
May  8 11:27:38.307: INFO: Created: latency-svc-gqhxq
May  8 11:27:38.337: INFO: Got endpoints: latency-svc-gqhxq [1.634889679s]
May  8 11:27:38.691: INFO: Created: latency-svc-mrrf8
May  8 11:27:38.734: INFO: Got endpoints: latency-svc-mrrf8 [1.935938198s]
May  8 11:27:38.973: INFO: Created: latency-svc-5j927
May  8 11:27:38.985: INFO: Got endpoints: latency-svc-5j927 [2.144219942s]
May  8 11:27:39.141: INFO: Created: latency-svc-fjxhm
May  8 11:27:39.200: INFO: Got endpoints: latency-svc-fjxhm [2.322748929s]
May  8 11:27:39.204: INFO: Created: latency-svc-9rl45
May  8 11:27:39.236: INFO: Got endpoints: latency-svc-9rl45 [2.268765684s]
May  8 11:27:39.398: INFO: Created: latency-svc-q2tjg
May  8 11:27:39.480: INFO: Got endpoints: latency-svc-q2tjg [2.482733246s]
May  8 11:27:39.495: INFO: Created: latency-svc-2gnrz
May  8 11:27:39.525: INFO: Got endpoints: latency-svc-2gnrz [2.48560155s]
May  8 11:27:39.549: INFO: Created: latency-svc-qpp2t
May  8 11:27:39.573: INFO: Got endpoints: latency-svc-qpp2t [2.461168671s]
May  8 11:27:39.633: INFO: Created: latency-svc-k7p65
May  8 11:27:39.664: INFO: Got endpoints: latency-svc-k7p65 [2.317083123s]
May  8 11:27:39.695: INFO: Created: latency-svc-8n8gc
May  8 11:27:39.706: INFO: Got endpoints: latency-svc-8n8gc [2.286787425s]
May  8 11:27:39.722: INFO: Created: latency-svc-4l7pb
May  8 11:27:39.804: INFO: Got endpoints: latency-svc-4l7pb [2.307538491s]
May  8 11:27:39.812: INFO: Created: latency-svc-xkq2j
May  8 11:27:39.826: INFO: Got endpoints: latency-svc-xkq2j [2.269811913s]
May  8 11:27:39.843: INFO: Created: latency-svc-7phx6
May  8 11:27:39.857: INFO: Got endpoints: latency-svc-7phx6 [2.231730294s]
May  8 11:27:39.879: INFO: Created: latency-svc-tx2fx
May  8 11:27:39.893: INFO: Got endpoints: latency-svc-tx2fx [2.215196442s]
May  8 11:27:39.947: INFO: Created: latency-svc-wkzlh
May  8 11:27:39.957: INFO: Got endpoints: latency-svc-wkzlh [2.142706109s]
May  8 11:27:39.987: INFO: Created: latency-svc-9rjjb
May  8 11:27:40.002: INFO: Got endpoints: latency-svc-9rjjb [1.664796706s]
May  8 11:27:40.028: INFO: Created: latency-svc-rjtzw
May  8 11:27:40.038: INFO: Got endpoints: latency-svc-rjtzw [1.304057628s]
May  8 11:27:40.091: INFO: Created: latency-svc-8wz9g
May  8 11:27:40.107: INFO: Got endpoints: latency-svc-8wz9g [1.121998456s]
May  8 11:27:40.137: INFO: Created: latency-svc-lvjs4
May  8 11:27:40.153: INFO: Got endpoints: latency-svc-lvjs4 [953.260081ms]
May  8 11:27:40.180: INFO: Created: latency-svc-dkk2l
May  8 11:27:40.189: INFO: Got endpoints: latency-svc-dkk2l [953.145222ms]
May  8 11:27:40.241: INFO: Created: latency-svc-88fj4
May  8 11:27:40.274: INFO: Got endpoints: latency-svc-88fj4 [793.785056ms]
May  8 11:27:40.311: INFO: Created: latency-svc-6kltx
May  8 11:27:40.321: INFO: Got endpoints: latency-svc-6kltx [796.137909ms]
May  8 11:27:40.341: INFO: Created: latency-svc-wkpgj
May  8 11:27:40.379: INFO: Got endpoints: latency-svc-wkpgj [805.690366ms]
May  8 11:27:40.402: INFO: Created: latency-svc-x9jkt
May  8 11:27:40.443: INFO: Got endpoints: latency-svc-x9jkt [778.884976ms]
May  8 11:27:40.529: INFO: Created: latency-svc-vgqks
May  8 11:27:40.532: INFO: Got endpoints: latency-svc-vgqks [826.501403ms]
May  8 11:27:40.563: INFO: Created: latency-svc-8pf2g
May  8 11:27:40.574: INFO: Got endpoints: latency-svc-8pf2g [769.940813ms]
May  8 11:27:40.593: INFO: Created: latency-svc-c6h89
May  8 11:27:40.623: INFO: Got endpoints: latency-svc-c6h89 [796.750839ms]
May  8 11:27:40.702: INFO: Created: latency-svc-m45q8
May  8 11:27:40.707: INFO: Got endpoints: latency-svc-m45q8 [850.73127ms]
May  8 11:27:40.749: INFO: Created: latency-svc-jmv45
May  8 11:27:40.767: INFO: Got endpoints: latency-svc-jmv45 [874.157239ms]
May  8 11:27:40.848: INFO: Created: latency-svc-cgqbj
May  8 11:27:40.849: INFO: Got endpoints: latency-svc-cgqbj [891.831698ms]
May  8 11:27:40.893: INFO: Created: latency-svc-pfwb6
May  8 11:27:40.924: INFO: Got endpoints: latency-svc-pfwb6 [921.807953ms]
May  8 11:27:41.014: INFO: Created: latency-svc-xnpsv
May  8 11:27:41.016: INFO: Got endpoints: latency-svc-xnpsv [978.500867ms]
May  8 11:27:41.079: INFO: Created: latency-svc-n6g7x
May  8 11:27:41.100: INFO: Got endpoints: latency-svc-n6g7x [992.636255ms]
May  8 11:27:41.176: INFO: Created: latency-svc-nmht4
May  8 11:27:41.201: INFO: Got endpoints: latency-svc-nmht4 [1.047610609s]
May  8 11:27:41.422: INFO: Created: latency-svc-j9js8
May  8 11:27:41.506: INFO: Got endpoints: latency-svc-j9js8 [1.316515009s]
May  8 11:27:41.547: INFO: Created: latency-svc-2p2xl
May  8 11:27:41.573: INFO: Got endpoints: latency-svc-2p2xl [1.299069446s]
May  8 11:27:41.660: INFO: Created: latency-svc-pwxl5
May  8 11:27:41.673: INFO: Got endpoints: latency-svc-pwxl5 [1.351780681s]
May  8 11:27:41.703: INFO: Created: latency-svc-565r5
May  8 11:27:41.924: INFO: Got endpoints: latency-svc-565r5 [1.544579057s]
May  8 11:27:41.967: INFO: Created: latency-svc-fllf7
May  8 11:27:41.985: INFO: Got endpoints: latency-svc-fllf7 [1.542234903s]
May  8 11:27:42.021: INFO: Created: latency-svc-8wk8d
May  8 11:27:42.061: INFO: Got endpoints: latency-svc-8wk8d [1.52857201s]
May  8 11:27:42.087: INFO: Created: latency-svc-h5s5f
May  8 11:27:42.096: INFO: Got endpoints: latency-svc-h5s5f [1.521922323s]
May  8 11:27:42.124: INFO: Created: latency-svc-dbsmt
May  8 11:27:42.159: INFO: Got endpoints: latency-svc-dbsmt [1.535427377s]
May  8 11:27:42.217: INFO: Created: latency-svc-m9rl6
May  8 11:27:42.243: INFO: Got endpoints: latency-svc-m9rl6 [1.535891978s]
May  8 11:27:42.292: INFO: Created: latency-svc-wn452
May  8 11:27:42.307: INFO: Got endpoints: latency-svc-wn452 [1.539866185s]
May  8 11:27:42.307: INFO: Latencies: [353.451902ms 399.663963ms 501.752251ms 544.898281ms 645.438332ms 769.940813ms 778.884976ms 793.785056ms 796.137909ms 796.750839ms 805.690366ms 826.501403ms 850.73127ms 874.157239ms 891.831698ms 921.807953ms 931.416047ms 944.252737ms 953.145222ms 953.260081ms 975.469765ms 978.500867ms 978.650193ms 982.011216ms 985.255161ms 987.781098ms 991.918751ms 992.636255ms 995.074889ms 998.017515ms 1.001724603s 1.008825001s 1.013576917s 1.015894677s 1.017454373s 1.017631824s 1.017894635s 1.018340855s 1.023170822s 1.026530825s 1.029285772s 1.030214491s 1.03065536s 1.03646532s 1.036523643s 1.039555546s 1.042208616s 1.043952787s 1.045215442s 1.045978143s 1.04648879s 1.047610609s 1.054030976s 1.06551444s 1.06568114s 1.068146748s 1.081837287s 1.085131899s 1.094996513s 1.095037607s 1.095265287s 1.096516141s 1.097328645s 1.0991046s 1.10002853s 1.101377851s 1.11587379s 1.116446911s 1.11705372s 1.118296221s 1.120927574s 1.121998456s 1.139516769s 1.143353114s 1.148510088s 1.149957201s 1.149982999s 1.152675096s 1.153226339s 1.176418036s 1.18783883s 1.19008541s 1.200305522s 1.200465166s 1.204158407s 1.216110245s 1.221593123s 1.22357023s 1.232099411s 1.239999804s 1.241773217s 1.25080341s 1.256227033s 1.281121144s 1.282639335s 1.283823176s 1.284497301s 1.29088246s 1.293610524s 1.299069446s 1.304057628s 1.308948569s 1.313953265s 1.316120435s 1.316515009s 1.319506934s 1.326345025s 1.327657742s 1.333585152s 1.337917996s 1.341132706s 1.349521201s 1.351609518s 1.351780681s 1.355015985s 1.359958726s 1.369105888s 1.374602039s 1.378693277s 1.378838528s 1.3830936s 1.386665349s 1.401381422s 1.416124247s 1.43995716s 1.461677178s 1.475161486s 1.521922323s 1.528524233s 1.52857201s 1.535010085s 1.535427377s 1.535891978s 1.539866185s 1.540410376s 1.542234903s 1.54267555s 1.544579057s 1.578370454s 1.582136631s 1.620096155s 1.628036397s 1.634889679s 1.64415724s 1.664796706s 1.689179244s 1.752615319s 1.759210977s 1.760622156s 1.801579077s 1.811189127s 1.815632991s 
1.825705338s 1.826577269s 1.831927457s 1.841060066s 1.863700937s 1.868988656s 1.878041707s 1.878894437s 1.891694302s 1.903869156s 1.921834535s 1.92911285s 1.935938198s 1.949674584s 1.954430485s 1.956230965s 1.968549311s 1.995750829s 2.059921189s 2.074652332s 2.142706109s 2.144219942s 2.215196442s 2.231730294s 2.268765684s 2.269811913s 2.271772741s 2.284498242s 2.286787425s 2.307538491s 2.317083123s 2.322748929s 2.461168671s 2.480752787s 2.482733246s 2.48560155s 2.620125794s 2.623365234s 2.685938724s 2.766555878s 2.792963945s 2.964144426s 2.985375194s 3.009556244s 3.012269431s 3.042972509s 3.08246169s 3.12778995s]
May  8 11:27:42.307: INFO: 50 %ile: 1.304057628s
May  8 11:27:42.307: INFO: 90 %ile: 2.286787425s
May  8 11:27:42.307: INFO: 99 %ile: 3.08246169s
May  8 11:27:42.307: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:27:42.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-9231" for this suite.

• [SLOW TEST:24.714 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high  [Conformance]","total":275,"completed":146,"skipped":2400,"failed":0}
SSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:27:42.374: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir volume type on tmpfs
May  8 11:27:42.434: INFO: Waiting up to 5m0s for pod "pod-39a4d7e7-be66-4ec1-8e48-26501509e070" in namespace "emptydir-7986" to be "Succeeded or Failed"
May  8 11:27:42.452: INFO: Pod "pod-39a4d7e7-be66-4ec1-8e48-26501509e070": Phase="Pending", Reason="", readiness=false. Elapsed: 17.804351ms
May  8 11:27:44.457: INFO: Pod "pod-39a4d7e7-be66-4ec1-8e48-26501509e070": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022044625s
May  8 11:27:46.461: INFO: Pod "pod-39a4d7e7-be66-4ec1-8e48-26501509e070": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02637042s
STEP: Saw pod success
May  8 11:27:46.461: INFO: Pod "pod-39a4d7e7-be66-4ec1-8e48-26501509e070" satisfied condition "Succeeded or Failed"
May  8 11:27:46.464: INFO: Trying to get logs from node kali-worker pod pod-39a4d7e7-be66-4ec1-8e48-26501509e070 container test-container: 
STEP: delete the pod
May  8 11:27:46.482: INFO: Waiting for pod pod-39a4d7e7-be66-4ec1-8e48-26501509e070 to disappear
May  8 11:27:46.531: INFO: Pod pod-39a4d7e7-be66-4ec1-8e48-26501509e070 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:27:46.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7986" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":147,"skipped":2404,"failed":0}
SSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:27:46.540: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
May  8 11:27:52.350: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:27:52.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-9505" for this suite.

• [SLOW TEST:5.985 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:133
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":148,"skipped":2414,"failed":0}
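The termination-message test above relies on the FallbackToLogsOnError behavior: the container writes "DONE" to its logs, exits non-zero, and leaves the termination-log file empty, so the log tail becomes the reported message. A simplified sketch of that selection rule (the kubelet additionally truncates long log tails, which this sketch omits):

```go
package main

import "fmt"

// terminationMessage mimics the policy exercised above: prefer the
// termination-log file contents; if the file is empty, the policy is
// FallbackToLogsOnError, and the container failed, fall back to the log
// tail. Simplified illustration, not kubelet code.
func terminationMessage(fileMsg, logTail string, fallbackToLogs bool, exitCode int) string {
	if fileMsg != "" {
		return fileMsg
	}
	if fallbackToLogs && exitCode != 0 {
		return logTail
	}
	return ""
}

func main() {
	// Mirrors the test: empty termination-log file, non-zero exit,
	// "DONE" in the container logs.
	fmt.Println(terminationMessage("", "DONE", true, 1))
}
```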
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:27:52.525: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
May  8 11:27:53.388: INFO: PodSpec: initContainers in spec.initContainers
May  8 11:28:50.535: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-7320a0e6-3fa3-45d7-8ccb-410203532606", GenerateName:"", Namespace:"init-container-1228", SelfLink:"/api/v1/namespaces/init-container-1228/pods/pod-init-7320a0e6-3fa3-45d7-8ccb-410203532606", UID:"1c718749-08fb-41ae-b2c8-d583f5bf8d47", ResourceVersion:"2574600", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63724534073, loc:(*time.Location)(0x7b200c0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"388179747"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00350a700), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00350a720)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00350a740), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00350a760)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-826nr", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002312580), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), 
FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-826nr", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-826nr", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), 
SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-826nr", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00555e918), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"kali-worker2", HostNetwork:false, HostPID:false, HostIPC:false, 
ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000794cb0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00555e9a0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00555e9c0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc00555e9c8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00555e9cc), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724534073, loc:(*time.Location)(0x7b200c0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724534073, loc:(*time.Location)(0x7b200c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724534073, loc:(*time.Location)(0x7b200c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, 
v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724534073, loc:(*time.Location)(0x7b200c0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.18", PodIP:"10.244.1.14", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.1.14"}}, StartTime:(*v1.Time)(0xc00350a780), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000794d90)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000794e00)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://9089fc3d8b67df48be343125477da5c3aa4db577d009fffd7feb92dba907b001", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00350a7c0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00350a7a0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), 
Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc00555ea4f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:28:50.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-1228" for this suite.

• [SLOW TEST:58.020 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":275,"completed":149,"skipped":2434,"failed":0}
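The pod dump above shows the ordering rule this test verifies: init1 runs /bin/false and keeps failing (RestartCount:3), so init2 stays Waiting and the app container run1 never starts, leaving the pod Pending. A toy simulation of that sequential gating (illustrative only, not kubelet logic):

```go
package main

import "fmt"

// runInits simulates init-container ordering: each init container must
// succeed before the next one runs, and app containers may start only
// after every init container has succeeded. With RestartPolicy Always a
// failing init container is retried forever, so the app never starts.
func runInits(results []bool) (appStarted bool, completed int) {
	for _, ok := range results {
		if !ok {
			// A failed init container blocks everything after it.
			return false, completed
		}
		completed++
	}
	return true, completed
}

func main() {
	// Mirrors the test: init1 (/bin/false) fails, init2 (/bin/true)
	// never gets to run, run1 never starts.
	started, done := runInits([]bool{false, true})
	fmt.Printf("app started: %v, inits completed: %d\n", started, done)
}
```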
SSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:28:50.545: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-map-aab9c8e1-46cc-4734-8024-618fcd338110
STEP: Creating a pod to test consume secrets
May  8 11:28:50.657: INFO: Waiting up to 5m0s for pod "pod-secrets-ff993b19-7215-403c-97fc-4d28a2ac0786" in namespace "secrets-9293" to be "Succeeded or Failed"
May  8 11:28:50.666: INFO: Pod "pod-secrets-ff993b19-7215-403c-97fc-4d28a2ac0786": Phase="Pending", Reason="", readiness=false. Elapsed: 9.284358ms
May  8 11:28:52.670: INFO: Pod "pod-secrets-ff993b19-7215-403c-97fc-4d28a2ac0786": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013763936s
May  8 11:28:54.674: INFO: Pod "pod-secrets-ff993b19-7215-403c-97fc-4d28a2ac0786": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017794602s
STEP: Saw pod success
May  8 11:28:54.674: INFO: Pod "pod-secrets-ff993b19-7215-403c-97fc-4d28a2ac0786" satisfied condition "Succeeded or Failed"
May  8 11:28:54.677: INFO: Trying to get logs from node kali-worker2 pod pod-secrets-ff993b19-7215-403c-97fc-4d28a2ac0786 container secret-volume-test: 
STEP: delete the pod
May  8 11:28:54.935: INFO: Waiting for pod pod-secrets-ff993b19-7215-403c-97fc-4d28a2ac0786 to disappear
May  8 11:28:54.941: INFO: Pod pod-secrets-ff993b19-7215-403c-97fc-4d28a2ac0786 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:28:54.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9293" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":150,"skipped":2439,"failed":0}
SSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:28:55.104: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May  8 11:28:55.238: INFO: Waiting up to 5m0s for pod "downwardapi-volume-df5a3aba-3311-4279-9d73-6afb9e94b9f2" in namespace "projected-2982" to be "Succeeded or Failed"
May  8 11:28:55.252: INFO: Pod "downwardapi-volume-df5a3aba-3311-4279-9d73-6afb9e94b9f2": Phase="Pending", Reason="", readiness=false. Elapsed: 14.868307ms
May  8 11:28:57.314: INFO: Pod "downwardapi-volume-df5a3aba-3311-4279-9d73-6afb9e94b9f2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076027727s
May  8 11:28:59.471: INFO: Pod "downwardapi-volume-df5a3aba-3311-4279-9d73-6afb9e94b9f2": Phase="Running", Reason="", readiness=true. Elapsed: 4.233816364s
May  8 11:29:01.536: INFO: Pod "downwardapi-volume-df5a3aba-3311-4279-9d73-6afb9e94b9f2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.298265109s
STEP: Saw pod success
May  8 11:29:01.536: INFO: Pod "downwardapi-volume-df5a3aba-3311-4279-9d73-6afb9e94b9f2" satisfied condition "Succeeded or Failed"
May  8 11:29:01.539: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-df5a3aba-3311-4279-9d73-6afb9e94b9f2 container client-container: 
STEP: delete the pod
May  8 11:29:01.726: INFO: Waiting for pod downwardapi-volume-df5a3aba-3311-4279-9d73-6afb9e94b9f2 to disappear
May  8 11:29:01.744: INFO: Pod downwardapi-volume-df5a3aba-3311-4279-9d73-6afb9e94b9f2 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:29:01.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2982" for this suite.

• [SLOW TEST:6.648 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":151,"skipped":2445,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:29:01.754: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0508 11:29:02.871083       7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May  8 11:29:02.871: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:29:02.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7842" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":275,"completed":152,"skipped":2492,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:29:02.879: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0777 on tmpfs
May  8 11:29:03.094: INFO: Waiting up to 5m0s for pod "pod-e5e12713-b8ad-42d3-8f43-18afba210410" in namespace "emptydir-8233" to be "Succeeded or Failed"
May  8 11:29:03.114: INFO: Pod "pod-e5e12713-b8ad-42d3-8f43-18afba210410": Phase="Pending", Reason="", readiness=false. Elapsed: 20.417853ms
May  8 11:29:05.118: INFO: Pod "pod-e5e12713-b8ad-42d3-8f43-18afba210410": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024664232s
May  8 11:29:07.146: INFO: Pod "pod-e5e12713-b8ad-42d3-8f43-18afba210410": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.052187121s
STEP: Saw pod success
May  8 11:29:07.146: INFO: Pod "pod-e5e12713-b8ad-42d3-8f43-18afba210410" satisfied condition "Succeeded or Failed"
May  8 11:29:07.149: INFO: Trying to get logs from node kali-worker pod pod-e5e12713-b8ad-42d3-8f43-18afba210410 container test-container: 
STEP: delete the pod
May  8 11:29:07.216: INFO: Waiting for pod pod-e5e12713-b8ad-42d3-8f43-18afba210410 to disappear
May  8 11:29:07.233: INFO: Pod pod-e5e12713-b8ad-42d3-8f43-18afba210410 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:29:07.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8233" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":153,"skipped":2527,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:29:07.241: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-map-651c115c-6444-4af4-8e8f-d7ac1f2fa32b
STEP: Creating a pod to test consume configMaps
May  8 11:29:07.493: INFO: Waiting up to 5m0s for pod "pod-configmaps-574493c2-a687-4355-b757-0debe16dc9a1" in namespace "configmap-167" to be "Succeeded or Failed"
May  8 11:29:07.532: INFO: Pod "pod-configmaps-574493c2-a687-4355-b757-0debe16dc9a1": Phase="Pending", Reason="", readiness=false. Elapsed: 38.702469ms
May  8 11:29:09.536: INFO: Pod "pod-configmaps-574493c2-a687-4355-b757-0debe16dc9a1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042945054s
May  8 11:29:11.541: INFO: Pod "pod-configmaps-574493c2-a687-4355-b757-0debe16dc9a1": Phase="Running", Reason="", readiness=true. Elapsed: 4.047948751s
May  8 11:29:13.544: INFO: Pod "pod-configmaps-574493c2-a687-4355-b757-0debe16dc9a1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.051298418s
STEP: Saw pod success
May  8 11:29:13.544: INFO: Pod "pod-configmaps-574493c2-a687-4355-b757-0debe16dc9a1" satisfied condition "Succeeded or Failed"
May  8 11:29:13.547: INFO: Trying to get logs from node kali-worker pod pod-configmaps-574493c2-a687-4355-b757-0debe16dc9a1 container configmap-volume-test: 
STEP: delete the pod
May  8 11:29:13.586: INFO: Waiting for pod pod-configmaps-574493c2-a687-4355-b757-0debe16dc9a1 to disappear
May  8 11:29:13.611: INFO: Pod pod-configmaps-574493c2-a687-4355-b757-0debe16dc9a1 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:29:13.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-167" for this suite.

• [SLOW TEST:6.377 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":154,"skipped":2536,"failed":0}
S
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:29:13.619: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Starting the proxy
May  8 11:29:13.749: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix872335410/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:29:13.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2944" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":275,"completed":155,"skipped":2537,"failed":0}
SSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:29:13.836: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-3434
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a new StatefulSet
May  8 11:29:13.993: INFO: Found 0 stateful pods, waiting for 3
May  8 11:29:24.039: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
May  8 11:29:24.039: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
May  8 11:29:24.039: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
May  8 11:29:34.015: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
May  8 11:29:34.015: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
May  8 11:29:34.015: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
May  8 11:29:34.024: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3434 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
May  8 11:29:34.509: INFO: stderr: "I0508 11:29:34.231634    2979 log.go:172] (0xc00003b1e0) (0xc0006b5540) Create stream\nI0508 11:29:34.231702    2979 log.go:172] (0xc00003b1e0) (0xc0006b5540) Stream added, broadcasting: 1\nI0508 11:29:34.236663    2979 log.go:172] (0xc00003b1e0) Reply frame received for 1\nI0508 11:29:34.236720    2979 log.go:172] (0xc00003b1e0) (0xc000978000) Create stream\nI0508 11:29:34.236738    2979 log.go:172] (0xc00003b1e0) (0xc000978000) Stream added, broadcasting: 3\nI0508 11:29:34.238420    2979 log.go:172] (0xc00003b1e0) Reply frame received for 3\nI0508 11:29:34.238451    2979 log.go:172] (0xc00003b1e0) (0xc0006b55e0) Create stream\nI0508 11:29:34.238463    2979 log.go:172] (0xc00003b1e0) (0xc0006b55e0) Stream added, broadcasting: 5\nI0508 11:29:34.239852    2979 log.go:172] (0xc00003b1e0) Reply frame received for 5\nI0508 11:29:34.331992    2979 log.go:172] (0xc00003b1e0) Data frame received for 5\nI0508 11:29:34.332039    2979 log.go:172] (0xc0006b55e0) (5) Data frame handling\nI0508 11:29:34.332071    2979 log.go:172] (0xc0006b55e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0508 11:29:34.502012    2979 log.go:172] (0xc00003b1e0) Data frame received for 5\nI0508 11:29:34.502052    2979 log.go:172] (0xc0006b55e0) (5) Data frame handling\nI0508 11:29:34.502087    2979 log.go:172] (0xc00003b1e0) Data frame received for 3\nI0508 11:29:34.502096    2979 log.go:172] (0xc000978000) (3) Data frame handling\nI0508 11:29:34.502149    2979 log.go:172] (0xc000978000) (3) Data frame sent\nI0508 11:29:34.502175    2979 log.go:172] (0xc00003b1e0) Data frame received for 3\nI0508 11:29:34.502180    2979 log.go:172] (0xc000978000) (3) Data frame handling\nI0508 11:29:34.504383    2979 log.go:172] (0xc00003b1e0) Data frame received for 1\nI0508 11:29:34.504419    2979 log.go:172] (0xc0006b5540) (1) Data frame handling\nI0508 11:29:34.504445    2979 log.go:172] (0xc0006b5540) (1) Data frame sent\nI0508 11:29:34.504470    2979 log.go:172] (0xc00003b1e0) (0xc0006b5540) Stream removed, broadcasting: 1\nI0508 11:29:34.504587    2979 log.go:172] (0xc00003b1e0) Go away received\nI0508 11:29:34.504996    2979 log.go:172] (0xc00003b1e0) (0xc0006b5540) Stream removed, broadcasting: 1\nI0508 11:29:34.505022    2979 log.go:172] (0xc00003b1e0) (0xc000978000) Stream removed, broadcasting: 3\nI0508 11:29:34.505034    2979 log.go:172] (0xc00003b1e0) (0xc0006b55e0) Stream removed, broadcasting: 5\n"
May  8 11:29:34.510: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
May  8 11:29:34.510: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
May  8 11:29:44.541: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
May  8 11:29:54.620: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3434 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May  8 11:29:54.912: INFO: stderr: "I0508 11:29:54.760316    3000 log.go:172] (0xc000afa000) (0xc00082f220) Create stream\nI0508 11:29:54.760376    3000 log.go:172] (0xc000afa000) (0xc00082f220) Stream added, broadcasting: 1\nI0508 11:29:54.762126    3000 log.go:172] (0xc000afa000) Reply frame received for 1\nI0508 11:29:54.762163    3000 log.go:172] (0xc000afa000) (0xc00099a000) Create stream\nI0508 11:29:54.762188    3000 log.go:172] (0xc000afa000) (0xc00099a000) Stream added, broadcasting: 3\nI0508 11:29:54.763551    3000 log.go:172] (0xc000afa000) Reply frame received for 3\nI0508 11:29:54.763604    3000 log.go:172] (0xc000afa000) (0xc00099a0a0) Create stream\nI0508 11:29:54.763618    3000 log.go:172] (0xc000afa000) (0xc00099a0a0) Stream added, broadcasting: 5\nI0508 11:29:54.764643    3000 log.go:172] (0xc000afa000) Reply frame received for 5\nI0508 11:29:54.905691    3000 log.go:172] (0xc000afa000) Data frame received for 3\nI0508 11:29:54.905726    3000 log.go:172] (0xc00099a000) (3) Data frame handling\nI0508 11:29:54.905745    3000 log.go:172] (0xc00099a000) (3) Data frame sent\nI0508 11:29:54.905785    3000 log.go:172] (0xc000afa000) Data frame received for 5\nI0508 11:29:54.905796    3000 log.go:172] (0xc00099a0a0) (5) Data frame handling\nI0508 11:29:54.905813    3000 log.go:172] (0xc00099a0a0) (5) Data frame sent\nI0508 11:29:54.905852    3000 log.go:172] (0xc000afa000) Data frame received for 5\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0508 11:29:54.905877    3000 log.go:172] (0xc00099a0a0) (5) Data frame handling\nI0508 11:29:54.905905    3000 log.go:172] (0xc000afa000) Data frame received for 3\nI0508 11:29:54.905922    3000 log.go:172] (0xc00099a000) (3) Data frame handling\nI0508 11:29:54.906991    3000 log.go:172] (0xc000afa000) Data frame received for 1\nI0508 11:29:54.907015    3000 log.go:172] (0xc00082f220) (1) Data frame handling\nI0508 11:29:54.907030    3000 log.go:172] (0xc00082f220) (1) Data frame sent\nI0508 11:29:54.907056    3000 log.go:172] (0xc000afa000) (0xc00082f220) Stream removed, broadcasting: 1\nI0508 11:29:54.907076    3000 log.go:172] (0xc000afa000) Go away received\nI0508 11:29:54.907493    3000 log.go:172] (0xc000afa000) (0xc00082f220) Stream removed, broadcasting: 1\nI0508 11:29:54.907524    3000 log.go:172] (0xc000afa000) (0xc00099a000) Stream removed, broadcasting: 3\nI0508 11:29:54.907540    3000 log.go:172] (0xc000afa000) (0xc00099a0a0) Stream removed, broadcasting: 5\n"
May  8 11:29:54.912: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
May  8 11:29:54.912: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

May  8 11:30:04.959: INFO: Waiting for StatefulSet statefulset-3434/ss2 to complete update
May  8 11:30:04.959: INFO: Waiting for Pod statefulset-3434/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
May  8 11:30:04.959: INFO: Waiting for Pod statefulset-3434/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
May  8 11:30:14.965: INFO: Waiting for StatefulSet statefulset-3434/ss2 to complete update
May  8 11:30:14.965: INFO: Waiting for Pod statefulset-3434/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
May  8 11:30:24.967: INFO: Waiting for StatefulSet statefulset-3434/ss2 to complete update
STEP: Rolling back to a previous revision
May  8 11:30:34.967: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3434 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
May  8 11:30:35.209: INFO: stderr: "I0508 11:30:35.095839    3021 log.go:172] (0xc00003a0b0) (0xc000b12000) Create stream\nI0508 11:30:35.095935    3021 log.go:172] (0xc00003a0b0) (0xc000b12000) Stream added, broadcasting: 1\nI0508 11:30:35.098954    3021 log.go:172] (0xc00003a0b0) Reply frame received for 1\nI0508 11:30:35.099080    3021 log.go:172] (0xc00003a0b0) (0xc00067f720) Create stream\nI0508 11:30:35.099108    3021 log.go:172] (0xc00003a0b0) (0xc00067f720) Stream added, broadcasting: 3\nI0508 11:30:35.100167    3021 log.go:172] (0xc00003a0b0) Reply frame received for 3\nI0508 11:30:35.100212    3021 log.go:172] (0xc00003a0b0) (0xc000b120a0) Create stream\nI0508 11:30:35.100230    3021 log.go:172] (0xc00003a0b0) (0xc000b120a0) Stream added, broadcasting: 5\nI0508 11:30:35.101312    3021 log.go:172] (0xc00003a0b0) Reply frame received for 5\nI0508 11:30:35.163967    3021 log.go:172] (0xc00003a0b0) Data frame received for 5\nI0508 11:30:35.163991    3021 log.go:172] (0xc000b120a0) (5) Data frame handling\nI0508 11:30:35.164004    3021 log.go:172] (0xc000b120a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0508 11:30:35.202878    3021 log.go:172] (0xc00003a0b0) Data frame received for 3\nI0508 11:30:35.202913    3021 log.go:172] (0xc00067f720) (3) Data frame handling\nI0508 11:30:35.202936    3021 log.go:172] (0xc00067f720) (3) Data frame sent\nI0508 11:30:35.203021    3021 log.go:172] (0xc00003a0b0) Data frame received for 3\nI0508 11:30:35.203039    3021 log.go:172] (0xc00067f720) (3) Data frame handling\nI0508 11:30:35.203072    3021 log.go:172] (0xc00003a0b0) Data frame received for 5\nI0508 11:30:35.203094    3021 log.go:172] (0xc000b120a0) (5) Data frame handling\nI0508 11:30:35.204990    3021 log.go:172] (0xc00003a0b0) Data frame received for 1\nI0508 11:30:35.205009    3021 log.go:172] (0xc000b12000) (1) Data frame handling\nI0508 11:30:35.205020    3021 log.go:172] (0xc000b12000) (1) Data frame sent\nI0508 11:30:35.205042    3021 log.go:172] (0xc00003a0b0) (0xc000b12000) Stream removed, broadcasting: 1\nI0508 11:30:35.205093    3021 log.go:172] (0xc00003a0b0) Go away received\nI0508 11:30:35.205574    3021 log.go:172] (0xc00003a0b0) (0xc000b12000) Stream removed, broadcasting: 1\nI0508 11:30:35.205590    3021 log.go:172] (0xc00003a0b0) (0xc00067f720) Stream removed, broadcasting: 3\nI0508 11:30:35.205597    3021 log.go:172] (0xc00003a0b0) (0xc000b120a0) Stream removed, broadcasting: 5\n"
May  8 11:30:35.209: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
May  8 11:30:35.209: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

May  8 11:30:45.241: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
May  8 11:30:55.324: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3434 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May  8 11:30:55.531: INFO: stderr: "I0508 11:30:55.452490    3043 log.go:172] (0xc000a57c30) (0xc000abeb40) Create stream\nI0508 11:30:55.452555    3043 log.go:172] (0xc000a57c30) (0xc000abeb40) Stream added, broadcasting: 1\nI0508 11:30:55.457084    3043 log.go:172] (0xc000a57c30) Reply frame received for 1\nI0508 11:30:55.457909    3043 log.go:172] (0xc000a57c30) (0xc000abe000) Create stream\nI0508 11:30:55.457950    3043 log.go:172] (0xc000a57c30) (0xc000abe000) Stream added, broadcasting: 3\nI0508 11:30:55.458939    3043 log.go:172] (0xc000a57c30) Reply frame received for 3\nI0508 11:30:55.458970    3043 log.go:172] (0xc000a57c30) (0xc0005e14a0) Create stream\nI0508 11:30:55.458997    3043 log.go:172] (0xc000a57c30) (0xc0005e14a0) Stream added, broadcasting: 5\nI0508 11:30:55.459958    3043 log.go:172] (0xc000a57c30) Reply frame received for 5\nI0508 11:30:55.524262    3043 log.go:172] (0xc000a57c30) Data frame received for 3\nI0508 11:30:55.524313    3043 log.go:172] (0xc000abe000) (3) Data frame handling\nI0508 11:30:55.524332    3043 log.go:172] (0xc000abe000) (3) Data frame sent\nI0508 11:30:55.524346    3043 log.go:172] (0xc000a57c30) Data frame received for 3\nI0508 11:30:55.524360    3043 log.go:172] (0xc000abe000) (3) Data frame handling\nI0508 11:30:55.524410    3043 log.go:172] (0xc000a57c30) Data frame received for 5\nI0508 11:30:55.524484    3043 log.go:172] (0xc0005e14a0) (5) Data frame handling\nI0508 11:30:55.524519    3043 log.go:172] (0xc0005e14a0) (5) Data frame sent\nI0508 11:30:55.524548    3043 log.go:172] (0xc000a57c30) Data frame received for 5\nI0508 11:30:55.524569    3043 log.go:172] (0xc0005e14a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0508 11:30:55.526953    3043 log.go:172] (0xc000a57c30) Data frame received for 1\nI0508 11:30:55.526984    3043 log.go:172] (0xc000abeb40) (1) Data frame handling\nI0508 11:30:55.527000    3043 log.go:172] (0xc000abeb40) (1) Data frame sent\nI0508 11:30:55.527023    3043 log.go:172] (0xc000a57c30) (0xc000abeb40) Stream removed, broadcasting: 1\nI0508 11:30:55.527050    3043 log.go:172] (0xc000a57c30) Go away received\nI0508 11:30:55.527422    3043 log.go:172] (0xc000a57c30) (0xc000abeb40) Stream removed, broadcasting: 1\nI0508 11:30:55.527435    3043 log.go:172] (0xc000a57c30) (0xc000abe000) Stream removed, broadcasting: 3\nI0508 11:30:55.527441    3043 log.go:172] (0xc000a57c30) (0xc0005e14a0) Stream removed, broadcasting: 5\n"
May  8 11:30:55.531: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
May  8 11:30:55.531: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

May  8 11:31:05.553: INFO: Waiting for StatefulSet statefulset-3434/ss2 to complete update
May  8 11:31:05.553: INFO: Waiting for Pod statefulset-3434/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
May  8 11:31:05.553: INFO: Waiting for Pod statefulset-3434/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
May  8 11:31:15.562: INFO: Waiting for StatefulSet statefulset-3434/ss2 to complete update
May  8 11:31:15.562: INFO: Waiting for Pod statefulset-3434/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
May  8 11:31:25.560: INFO: Waiting for StatefulSet statefulset-3434/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
May  8 11:31:35.562: INFO: Deleting all statefulset in ns statefulset-3434
May  8 11:31:35.565: INFO: Scaling statefulset ss2 to 0
May  8 11:31:55.614: INFO: Waiting for statefulset status.replicas updated to 0
May  8 11:31:55.616: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:31:55.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-3434" for this suite.

• [SLOW TEST:161.802 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":275,"completed":156,"skipped":2540,"failed":0}
SSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:31:55.638: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May  8 11:31:56.068: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May  8 11:31:58.168: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724534316, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724534316, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724534316, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724534315, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May  8 11:32:01.190: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the crd webhook via the AdmissionRegistration API
STEP: Creating a custom resource definition that should be denied by the webhook
May  8 11:32:01.263: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:32:01.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-227" for this suite.
STEP: Destroying namespace "webhook-227-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:5.888 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":275,"completed":157,"skipped":2543,"failed":0}
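[Editor's note] The spec above registers a validating admission webhook that rejects CRD creation. A minimal sketch of the kind of configuration involved (all names, the service reference, and the caBundle are illustrative placeholders, not the exact objects the e2e framework creates):

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-crd-example            # illustrative name
webhooks:
- name: deny-crd.example.com
  rules:
  - apiGroups: ["apiextensions.k8s.io"]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["customresourcedefinitions"]
  clientConfig:
    service:
      namespace: webhook-227        # from the test namespace above
      name: e2e-test-webhook
      path: /crd                    # assumed handler path
    caBundle: Cg==                  # placeholder; real CA bundle required
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Fail
```

With such a webhook in place, any subsequent CustomResourceDefinition create request is denied by the webhook's admission response, which is what the "should deny crd creation" step verifies.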
SSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:32:01.526: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Discovering how many secrets are in namespace by default
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Secret
STEP: Ensuring resource quota status captures secret creation
STEP: Deleting a secret
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:32:18.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-8288" for this suite.

• [SLOW TEST:17.450 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":275,"completed":158,"skipped":2552,"failed":0}
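[Editor's note] The ResourceQuota spec above creates a quota whose status tracks secret creation and deletion. A minimal sketch of such a quota (name and limit illustrative):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota-secrets-example       # illustrative name
spec:
  hard:
    secrets: "1"                    # object-count quota on Secrets
```

After a Secret is created in the namespace, `status.used.secrets` rises to 1; after the Secret is deleted, usage is released back to 0 — the two conditions the test waits on ("captures secret creation" / "released usage").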
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:32:18.977: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating secret secrets-5946/secret-test-38cc654f-2cb1-4ea7-b827-e4c9806160bc
STEP: Creating a pod to test consume secrets
May  8 11:32:19.048: INFO: Waiting up to 5m0s for pod "pod-configmaps-1fb223e1-d8e5-49eb-8755-4d77137ab6d1" in namespace "secrets-5946" to be "Succeeded or Failed"
May  8 11:32:19.052: INFO: Pod "pod-configmaps-1fb223e1-d8e5-49eb-8755-4d77137ab6d1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.234435ms
May  8 11:32:21.056: INFO: Pod "pod-configmaps-1fb223e1-d8e5-49eb-8755-4d77137ab6d1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007980841s
May  8 11:32:23.060: INFO: Pod "pod-configmaps-1fb223e1-d8e5-49eb-8755-4d77137ab6d1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011773139s
STEP: Saw pod success
May  8 11:32:23.060: INFO: Pod "pod-configmaps-1fb223e1-d8e5-49eb-8755-4d77137ab6d1" satisfied condition "Succeeded or Failed"
May  8 11:32:23.063: INFO: Trying to get logs from node kali-worker pod pod-configmaps-1fb223e1-d8e5-49eb-8755-4d77137ab6d1 container env-test: 
STEP: delete the pod
May  8 11:32:23.114: INFO: Waiting for pod pod-configmaps-1fb223e1-d8e5-49eb-8755-4d77137ab6d1 to disappear
May  8 11:32:23.118: INFO: Pod pod-configmaps-1fb223e1-d8e5-49eb-8755-4d77137ab6d1 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:32:23.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5946" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":275,"completed":159,"skipped":2568,"failed":0}
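[Editor's note] The Secrets spec above injects a Secret's keys into a container's environment. A minimal sketch of a pod consuming a secret this way, e.g. via `envFrom` (names and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-example          # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox:1.29
    command: ["sh", "-c", "env"]    # prints env so the test can check values
    envFrom:
    - secretRef:
        name: secret-test-example   # each key becomes an env var
```

The test then reads the container logs and asserts the secret's key/value pairs appear in the environment output.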
SSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:32:23.145: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-upd-521595e7-1a03-4ffa-96f4-89b2b61b9e3e
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:32:27.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3374" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":160,"skipped":2578,"failed":0}
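[Editor's note] The ConfigMap spec above verifies that both `data` (text) and `binaryData` (arbitrary bytes) keys are materialized as files in a ConfigMap volume. A minimal sketch (names and values illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-binary-example    # illustrative name
data:
  text-data: "hello"                # UTF-8 string key
binaryData:
  binary-data: aGVsbG8gd29ybGQ=    # base64-encoded raw bytes
```

When mounted as a volume, the pod sees one file per key; the test waits for the text file's contents and then for the decoded binary file's contents, matching the "Waiting for pod with text data" / "Waiting for pod with binary data" steps above.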
SSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:32:27.294: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-5a2d1313-1bf0-4157-8286-17e36018216e
STEP: Creating a pod to test consume configMaps
May  8 11:32:27.381: INFO: Waiting up to 5m0s for pod "pod-configmaps-5d9f29be-4791-499a-b326-ab7fafc34844" in namespace "configmap-6663" to be "Succeeded or Failed"
May  8 11:32:27.418: INFO: Pod "pod-configmaps-5d9f29be-4791-499a-b326-ab7fafc34844": Phase="Pending", Reason="", readiness=false. Elapsed: 37.476216ms
May  8 11:32:29.422: INFO: Pod "pod-configmaps-5d9f29be-4791-499a-b326-ab7fafc34844": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040890385s
May  8 11:32:31.426: INFO: Pod "pod-configmaps-5d9f29be-4791-499a-b326-ab7fafc34844": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.045083737s
STEP: Saw pod success
May  8 11:32:31.426: INFO: Pod "pod-configmaps-5d9f29be-4791-499a-b326-ab7fafc34844" satisfied condition "Succeeded or Failed"
May  8 11:32:31.429: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-5d9f29be-4791-499a-b326-ab7fafc34844 container configmap-volume-test: 
STEP: delete the pod
May  8 11:32:31.469: INFO: Waiting for pod pod-configmaps-5d9f29be-4791-499a-b326-ab7fafc34844 to disappear
May  8 11:32:31.478: INFO: Pod pod-configmaps-5d9f29be-4791-499a-b326-ab7fafc34844 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:32:31.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6663" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":275,"completed":161,"skipped":2591,"failed":0}
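[Editor's note] The spec above mounts the same ConfigMap as two separate volumes in one pod. A minimal sketch (names, image, and paths illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: configmap-two-volumes-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/cm-1/data-1 /etc/cm-2/data-1"]
    volumeMounts:
    - name: configmap-volume-1
      mountPath: /etc/cm-1
    - name: configmap-volume-2
      mountPath: /etc/cm-2
  volumes:
  - name: configmap-volume-1
    configMap:
      name: configmap-test-volume-example   # same ConfigMap...
  - name: configmap-volume-2
    configMap:
      name: configmap-test-volume-example   # ...mounted twice
```

The test asserts the same key content is readable at both mount paths.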
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:32:31.489: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0777 on node default medium
May  8 11:32:31.618: INFO: Waiting up to 5m0s for pod "pod-8773c186-f653-4be0-9aff-d3c45f3cf1c5" in namespace "emptydir-997" to be "Succeeded or Failed"
May  8 11:32:31.635: INFO: Pod "pod-8773c186-f653-4be0-9aff-d3c45f3cf1c5": Phase="Pending", Reason="", readiness=false. Elapsed: 17.402806ms
May  8 11:32:33.641: INFO: Pod "pod-8773c186-f653-4be0-9aff-d3c45f3cf1c5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022591546s
May  8 11:32:35.645: INFO: Pod "pod-8773c186-f653-4be0-9aff-d3c45f3cf1c5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026867907s
May  8 11:32:37.649: INFO: Pod "pod-8773c186-f653-4be0-9aff-d3c45f3cf1c5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.031368153s
STEP: Saw pod success
May  8 11:32:37.649: INFO: Pod "pod-8773c186-f653-4be0-9aff-d3c45f3cf1c5" satisfied condition "Succeeded or Failed"
May  8 11:32:37.652: INFO: Trying to get logs from node kali-worker2 pod pod-8773c186-f653-4be0-9aff-d3c45f3cf1c5 container test-container: 
STEP: delete the pod
May  8 11:32:37.685: INFO: Waiting for pod pod-8773c186-f653-4be0-9aff-d3c45f3cf1c5 to disappear
May  8 11:32:37.706: INFO: Pod pod-8773c186-f653-4be0-9aff-d3c45f3cf1c5 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:32:37.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-997" for this suite.

• [SLOW TEST:6.250 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":162,"skipped":2632,"failed":0}
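[Editor's note] The EmptyDir spec above runs a non-root container that writes a file with mode 0777 into an emptyDir on the default (node-disk) medium and checks the resulting permissions. A rough sketch of the shape of such a pod (names, UID, image, and command are illustrative; the real test uses a dedicated mount-test image):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-example       # illustrative name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001                 # non-root, per the (non-root,...) variant
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["sh", "-c", "touch /test-volume/f && chmod 0777 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                    # default medium (backed by node storage)
```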
SSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:32:37.740: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May  8 11:32:37.921: INFO: Waiting up to 5m0s for pod "downwardapi-volume-67dbdcf7-712f-4566-843c-5bab1eb22c07" in namespace "downward-api-4357" to be "Succeeded or Failed"
May  8 11:32:37.926: INFO: Pod "downwardapi-volume-67dbdcf7-712f-4566-843c-5bab1eb22c07": Phase="Pending", Reason="", readiness=false. Elapsed: 4.412887ms
May  8 11:32:39.955: INFO: Pod "downwardapi-volume-67dbdcf7-712f-4566-843c-5bab1eb22c07": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034027126s
May  8 11:32:41.970: INFO: Pod "downwardapi-volume-67dbdcf7-712f-4566-843c-5bab1eb22c07": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.048982841s
STEP: Saw pod success
May  8 11:32:41.970: INFO: Pod "downwardapi-volume-67dbdcf7-712f-4566-843c-5bab1eb22c07" satisfied condition "Succeeded or Failed"
May  8 11:32:41.972: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-67dbdcf7-712f-4566-843c-5bab1eb22c07 container client-container: 
STEP: delete the pod
May  8 11:32:42.046: INFO: Waiting for pod downwardapi-volume-67dbdcf7-712f-4566-843c-5bab1eb22c07 to disappear
May  8 11:32:42.057: INFO: Pod downwardapi-volume-67dbdcf7-712f-4566-843c-5bab1eb22c07 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:32:42.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4357" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":275,"completed":163,"skipped":2645,"failed":0}
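[Editor's note] The Downward API spec above exposes only the pod's name through a downwardAPI volume. A minimal sketch (names and image illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example  # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name  # projects the pod's own name into a file
```

The test reads the container's log output and asserts it equals the pod's name.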
S
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:32:42.065: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-projected-all-test-volume-e0a0dcb5-17e1-4172-af6f-8cf849708a98
STEP: Creating secret with name secret-projected-all-test-volume-9c55760b-44e1-466d-ac01-84c209a9f672
STEP: Creating a pod to test Check all projections for projected volume plugin
May  8 11:32:42.316: INFO: Waiting up to 5m0s for pod "projected-volume-3746be03-1e95-420b-b179-c3bd19d8556e" in namespace "projected-2928" to be "Succeeded or Failed"
May  8 11:32:42.430: INFO: Pod "projected-volume-3746be03-1e95-420b-b179-c3bd19d8556e": Phase="Pending", Reason="", readiness=false. Elapsed: 113.150518ms
May  8 11:32:44.434: INFO: Pod "projected-volume-3746be03-1e95-420b-b179-c3bd19d8556e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.117559162s
May  8 11:32:46.570: INFO: Pod "projected-volume-3746be03-1e95-420b-b179-c3bd19d8556e": Phase="Running", Reason="", readiness=true. Elapsed: 4.253665605s
May  8 11:32:48.574: INFO: Pod "projected-volume-3746be03-1e95-420b-b179-c3bd19d8556e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.257874814s
STEP: Saw pod success
May  8 11:32:48.574: INFO: Pod "projected-volume-3746be03-1e95-420b-b179-c3bd19d8556e" satisfied condition "Succeeded or Failed"
May  8 11:32:48.578: INFO: Trying to get logs from node kali-worker2 pod projected-volume-3746be03-1e95-420b-b179-c3bd19d8556e container projected-all-volume-test: 
STEP: delete the pod
May  8 11:32:48.659: INFO: Waiting for pod projected-volume-3746be03-1e95-420b-b179-c3bd19d8556e to disappear
May  8 11:32:48.682: INFO: Pod projected-volume-3746be03-1e95-420b-b179-c3bd19d8556e no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:32:48.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2928" for this suite.

• [SLOW TEST:6.625 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:32
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":275,"completed":164,"skipped":2646,"failed":0}
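[Editor's note] The "Projected combined" spec above checks that a single projected volume can merge a ConfigMap, a Secret, and downwardAPI fields under one mount point. A minimal sketch (all names, keys, and paths illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: projected-volume-example    # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: projected-all-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "cat /all/podname /all/cm-data /all/secret-data"]
    volumeMounts:
    - name: all-in-one
      mountPath: /all
  volumes:
  - name: all-in-one
    projected:
      sources:
      - configMap:
          name: projected-cm-example
          items:
          - key: configmap-data
            path: cm-data
      - secret:
          name: projected-secret-example
          items:
          - key: secret-data
            path: secret-data
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
```

All three sources appear as files under the same mount path, which is the behavior the test asserts.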
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:32:48.690: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
May  8 11:32:48.874: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-3906 /api/v1/namespaces/watch-3906/configmaps/e2e-watch-test-label-changed 3c378731-dad2-4f14-9c6a-b3960eea5d68 2576102 0 2020-05-08 11:32:48 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-05-08 11:32:48 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
May  8 11:32:48.874: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-3906 /api/v1/namespaces/watch-3906/configmaps/e2e-watch-test-label-changed 3c378731-dad2-4f14-9c6a-b3960eea5d68 2576103 0 2020-05-08 11:32:48 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-05-08 11:32:48 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
May  8 11:32:48.874: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-3906 /api/v1/namespaces/watch-3906/configmaps/e2e-watch-test-label-changed 3c378731-dad2-4f14-9c6a-b3960eea5d68 2576105 0 2020-05-08 11:32:48 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-05-08 11:32:48 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
May  8 11:32:58.913: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-3906 /api/v1/namespaces/watch-3906/configmaps/e2e-watch-test-label-changed 3c378731-dad2-4f14-9c6a-b3960eea5d68 2576148 0 2020-05-08 11:32:48 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-05-08 11:32:58 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
May  8 11:32:58.914: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-3906 /api/v1/namespaces/watch-3906/configmaps/e2e-watch-test-label-changed 3c378731-dad2-4f14-9c6a-b3960eea5d68 2576149 0 2020-05-08 11:32:48 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-05-08 11:32:58 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
May  8 11:32:58.914: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-3906 /api/v1/namespaces/watch-3906/configmaps/e2e-watch-test-label-changed 3c378731-dad2-4f14-9c6a-b3960eea5d68 2576150 0 2020-05-08 11:32:48 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-05-08 11:32:58 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:32:58.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-3906" for this suite.

• [SLOW TEST:10.231 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":275,"completed":165,"skipped":2664,"failed":0}
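[Editor's note] The Watchers spec above opens a watch on ConfigMaps filtered by a label selector (per the log: `watch-this-configmap=label-changed-and-restored`). The watched object is essentially of this shape (mutation value illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-label-changed
  labels:
    watch-this-configmap: label-changed-and-restored  # the label the watch selects on
data:
  mutation: "1"
```

Watch semantics being verified: when the label value is changed so the object no longer matches the selector, the watcher receives a DELETED event (even though the object still exists); while it does not match, further modifications produce no events; and when the label is restored, the watcher receives a fresh ADDED event — exactly the ADDED/MODIFIED/DELETED sequences in the log above.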
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:32:58.922: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-8dd6878b-eca5-4a35-8815-c69f66a75415
STEP: Creating a pod to test consume secrets
May  8 11:32:59.024: INFO: Waiting up to 5m0s for pod "pod-secrets-625af649-88b8-4749-89e0-caf766b88cb8" in namespace "secrets-9734" to be "Succeeded or Failed"
May  8 11:32:59.038: INFO: Pod "pod-secrets-625af649-88b8-4749-89e0-caf766b88cb8": Phase="Pending", Reason="", readiness=false. Elapsed: 14.735071ms
May  8 11:33:01.042: INFO: Pod "pod-secrets-625af649-88b8-4749-89e0-caf766b88cb8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018310335s
May  8 11:33:03.046: INFO: Pod "pod-secrets-625af649-88b8-4749-89e0-caf766b88cb8": Phase="Running", Reason="", readiness=true. Elapsed: 4.022880482s
May  8 11:33:05.051: INFO: Pod "pod-secrets-625af649-88b8-4749-89e0-caf766b88cb8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.027026634s
STEP: Saw pod success
May  8 11:33:05.051: INFO: Pod "pod-secrets-625af649-88b8-4749-89e0-caf766b88cb8" satisfied condition "Succeeded or Failed"
May  8 11:33:05.054: INFO: Trying to get logs from node kali-worker pod pod-secrets-625af649-88b8-4749-89e0-caf766b88cb8 container secret-env-test: 
STEP: delete the pod
May  8 11:33:05.198: INFO: Waiting for pod pod-secrets-625af649-88b8-4749-89e0-caf766b88cb8 to disappear
May  8 11:33:05.213: INFO: Pod pod-secrets-625af649-88b8-4749-89e0-caf766b88cb8 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:33:05.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9734" for this suite.

• [SLOW TEST:6.298 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:35
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":275,"completed":166,"skipped":2699,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:33:05.221: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May  8 11:33:07.130: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May  8 11:33:09.141: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724534387, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724534387, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724534387, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724534386, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May  8 11:33:12.171: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the mutating configmap webhook via the AdmissionRegistration API
STEP: create a configmap that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:33:12.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9969" for this suite.
STEP: Destroying namespace "webhook-9969-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:7.247 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":275,"completed":167,"skipped":2730,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
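[Editor's note] The record above registers a mutating webhook against the framework's sample webhook service before creating the ConfigMap. A minimal sketch of such a registration, assuming illustrative names and a placeholder CA bundle (the framework's actual values are not shown in the log):

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: e2e-test-mutating-webhook-config   # illustrative name, not the test's
webhooks:
  - name: mutate-configmap.example.com     # must be a fully qualified name
    admissionReviewVersions: ["v1"]
    sideEffects: None
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["configmaps"]
    clientConfig:
      service:
        namespace: webhook-9969            # test namespace seen in the log
        name: e2e-test-webhook             # service name seen in the log
        path: /mutating-configmaps         # illustrative path
      caBundle: Cg==                       # placeholder; base64 CA cert in practice
```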
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:33:12.468: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-5749
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a new StatefulSet
May  8 11:33:12.555: INFO: Found 0 stateful pods, waiting for 3
May  8 11:33:22.560: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
May  8 11:33:22.560: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
May  8 11:33:22.560: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
May  8 11:33:32.560: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
May  8 11:33:32.560: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
May  8 11:33:32.560: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
May  8 11:33:32.585: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
May  8 11:33:42.637: INFO: Updating stateful set ss2
May  8 11:33:42.712: INFO: Waiting for Pod statefulset-5749/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
May  8 11:33:52.718: INFO: Waiting for Pod statefulset-5749/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
STEP: Restoring Pods to the correct revision when they are deleted
May  8 11:34:03.507: INFO: Found 2 stateful pods, waiting for 3
May  8 11:34:13.512: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
May  8 11:34:13.512: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
May  8 11:34:13.512: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
May  8 11:34:13.565: INFO: Updating stateful set ss2
May  8 11:34:13.633: INFO: Waiting for Pod statefulset-5749/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
May  8 11:34:23.673: INFO: Updating stateful set ss2
May  8 11:34:23.714: INFO: Waiting for StatefulSet statefulset-5749/ss2 to complete update
May  8 11:34:23.714: INFO: Waiting for Pod statefulset-5749/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
May  8 11:34:33.721: INFO: Deleting all statefulset in ns statefulset-5749
May  8 11:34:33.723: INFO: Scaling statefulset ss2 to 0
May  8 11:35:03.744: INFO: Waiting for statefulset status.replicas updated to 0
May  8 11:35:03.747: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:35:03.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-5749" for this suite.

• [SLOW TEST:111.304 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":275,"completed":168,"skipped":2786,"failed":0}
S
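[Editor's note] The canary and phased steps above are driven by the StatefulSet RollingUpdate `partition` field: only pods with ordinal >= partition receive the new revision, so setting it to 2 canaries ss2-2 while ss2-0 and ss2-1 keep the old template. A minimal sketch mirroring the names and images in the log (partition values illustrate the technique, not the test's literal sequence):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2
spec:
  replicas: 3
  serviceName: test                 # headless service created by the test
  selector:
    matchLabels: {app: ss2}
  template:
    metadata:
      labels: {app: ss2}
    spec:
      containers:
        - name: webserver
          image: docker.io/library/httpd:2.4.39-alpine  # updated image from the log
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 2   # canary: only ordinal >= 2 (ss2-2) is updated
```

Lowering `partition` step by step (2 → 1 → 0) then rolls the remaining ordinals, which is the "phased rolling update" the test performs.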
------------------------------
[sig-cli] Kubectl client Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:35:03.773: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May  8 11:35:03.856: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1488'
May  8 11:35:06.929: INFO: stderr: ""
May  8 11:35:06.929: INFO: stdout: "replicationcontroller/agnhost-master created\n"
May  8 11:35:06.929: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1488'
May  8 11:35:07.216: INFO: stderr: ""
May  8 11:35:07.216: INFO: stdout: "service/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
May  8 11:35:08.221: INFO: Selector matched 1 pods for map[app:agnhost]
May  8 11:35:08.221: INFO: Found 0 / 1
May  8 11:35:09.467: INFO: Selector matched 1 pods for map[app:agnhost]
May  8 11:35:09.467: INFO: Found 0 / 1
May  8 11:35:10.242: INFO: Selector matched 1 pods for map[app:agnhost]
May  8 11:35:10.242: INFO: Found 0 / 1
May  8 11:35:11.221: INFO: Selector matched 1 pods for map[app:agnhost]
May  8 11:35:11.221: INFO: Found 1 / 1
May  8 11:35:11.221: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
May  8 11:35:11.224: INFO: Selector matched 1 pods for map[app:agnhost]
May  8 11:35:11.224: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
May  8 11:35:11.224: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config describe pod agnhost-master-q9tgg --namespace=kubectl-1488'
May  8 11:35:11.337: INFO: stderr: ""
May  8 11:35:11.337: INFO: stdout: "Name:         agnhost-master-q9tgg\nNamespace:    kubectl-1488\nPriority:     0\nNode:         kali-worker2/172.17.0.18\nStart Time:   Fri, 08 May 2020 11:35:07 +0000\nLabels:       app=agnhost\n              role=master\nAnnotations:  \nStatus:       Running\nIP:           10.244.1.34\nIPs:\n  IP:           10.244.1.34\nControlled By:  ReplicationController/agnhost-master\nContainers:\n  agnhost-master:\n    Container ID:   containerd://1954487f541962e2f28cc1075507d59c863143e6fa42e5e0084e25cf70204185\n    Image:          us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\n    Image ID:       us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Fri, 08 May 2020 11:35:09 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    \n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-pmp46 (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-pmp46:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-pmp46\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  \nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age   From                   Message\n  ----    ------     ----  ----                   -------\n  Normal  Scheduled  5s    default-scheduler      Successfully assigned kubectl-1488/agnhost-master-q9tgg to kali-worker2\n  Normal  Pulled     3s    kubelet, kali-worker2  Container image \"us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\" already present on machine\n  Normal  Created    2s    kubelet, kali-worker2  Created container agnhost-master\n  Normal  Started    1s    kubelet, kali-worker2  Started container agnhost-master\n"
May  8 11:35:11.338: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-1488'
May  8 11:35:11.470: INFO: stderr: ""
May  8 11:35:11.470: INFO: stdout: "Name:         agnhost-master\nNamespace:    kubectl-1488\nSelector:     app=agnhost,role=master\nLabels:       app=agnhost\n              role=master\nAnnotations:  \nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=agnhost\n           role=master\n  Containers:\n   agnhost-master:\n    Image:        us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  \n    Mounts:       \n  Volumes:        \nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  5s    replication-controller  Created pod: agnhost-master-q9tgg\n"
May  8 11:35:11.470: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-1488'
May  8 11:35:11.575: INFO: stderr: ""
May  8 11:35:11.575: INFO: stdout: "Name:              agnhost-master\nNamespace:         kubectl-1488\nLabels:            app=agnhost\n                   role=master\nAnnotations:       \nSelector:          app=agnhost,role=master\nType:              ClusterIP\nIP:                10.100.238.216\nPort:                6379/TCP\nTargetPort:        agnhost-server/TCP\nEndpoints:         10.244.1.34:6379\nSession Affinity:  None\nEvents:            \n"
May  8 11:35:11.578: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config describe node kali-control-plane'
May  8 11:35:11.726: INFO: stderr: ""
May  8 11:35:11.726: INFO: stdout: "Name:               kali-control-plane\nRoles:              master\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=kali-control-plane\n                    kubernetes.io/os=linux\n                    node-role.kubernetes.io/master=\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Wed, 29 Apr 2020 09:30:59 +0000\nTaints:             node-role.kubernetes.io/master:NoSchedule\nUnschedulable:      false\nLease:\n  HolderIdentity:  kali-control-plane\n  AcquireTime:     \n  RenewTime:       Fri, 08 May 2020 11:35:10 +0000\nConditions:\n  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----             ------  -----------------                 ------------------                ------                       -------\n  MemoryPressure   False   Fri, 08 May 2020 11:33:24 +0000   Wed, 29 Apr 2020 09:30:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure     False   Fri, 08 May 2020 11:33:24 +0000   Wed, 29 Apr 2020 09:30:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure      False   Fri, 08 May 2020 11:33:24 +0000   Wed, 29 Apr 2020 09:30:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready            True    Fri, 08 May 2020 11:33:24 +0000   Wed, 29 Apr 2020 09:31:34 +0000   KubeletReady                 kubelet is posting ready status\nAddresses:\n  InternalIP:  172.17.0.19\n  Hostname:    kali-control-plane\nCapacity:\n  cpu:                16\n  ephemeral-storage:  2303189964Ki\n  hugepages-1Gi:      0\n  hugepages-2Mi:      0\n  memory:             131759892Ki\n  pods:               110\nAllocatable:\n  cpu:                16\n  ephemeral-storage:  2303189964Ki\n  hugepages-1Gi:      0\n  hugepages-2Mi:      0\n  memory:             131759892Ki\n  pods:               110\nSystem Info:\n  Machine ID:                 2146cf85bed648199604ab2e0e9ac609\n  System UUID:                e83c0db4-babe-44fc-9dad-b5eeae6d23fd\n  Boot ID:                    ca2aa731-f890-4956-92a1-ff8c7560d571\n  Kernel Version:             4.15.0-88-generic\n  OS Image:                   Ubuntu 19.10\n  Operating System:           linux\n  Architecture:               amd64\n  Container Runtime Version:  containerd://1.3.3-14-g449e9269\n  Kubelet Version:            v1.18.2\n  Kube-Proxy Version:         v1.18.2\nPodCIDR:                      10.244.0.0/24\nPodCIDRs:                     10.244.0.0/24\nNon-terminated Pods:          (9 in total)\n  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---\n  kube-system                 coredns-66bff467f8-rvq2k                      100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     9d\n  kube-system                 coredns-66bff467f8-w6zxd                      100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     9d\n  kube-system                 etcd-kali-control-plane                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9d\n  kube-system                 kindnet-65djz                                 100m (0%)     100m (0%)   50Mi (0%)        50Mi (0%)      9d\n  kube-system                 kube-apiserver-kali-control-plane             250m (1%)     0 (0%)      0 (0%)           0 (0%)         9d\n  kube-system                 kube-controller-manager-kali-control-plane    200m (1%)     0 (0%)      0 (0%)           0 (0%)         9d\n  kube-system                 kube-proxy-pnhtq                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9d\n  kube-system                 kube-scheduler-kali-control-plane             100m (0%)     0 (0%)      0 (0%)           0 (0%)         9d\n  local-path-storage          local-path-provisioner-bd4bb6b75-6l9ph        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9d\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests    Limits\n  --------           --------    ------\n  cpu                850m (5%)   100m (0%)\n  memory             190Mi (0%)  390Mi (0%)\n  ephemeral-storage  0 (0%)      0 (0%)\n  hugepages-1Gi      0 (0%)      0 (0%)\n  hugepages-2Mi      0 (0%)      0 (0%)\nEvents:              \n"
May  8 11:35:11.726: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config describe namespace kubectl-1488'
May  8 11:35:11.831: INFO: stderr: ""
May  8 11:35:11.831: INFO: stdout: "Name:         kubectl-1488\nLabels:       e2e-framework=kubectl\n              e2e-run=4f14be6b-7651-411f-a0bb-821a1da97ee2\nAnnotations:  \nStatus:       Active\n\nNo resource quota.\n\nNo LimitRange resource.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:35:11.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1488" for this suite.

• [SLOW TEST:8.065 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:978
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods  [Conformance]","total":275,"completed":169,"skipped":2787,"failed":0}
SSSS
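[Editor's note] The describe output above covers an agnhost ReplicationController piped to `kubectl create -f -`. A sketch of that kind of manifest, with fields inferred from the describe output in the log (the test's actual manifest is not shown):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: agnhost-master
  labels: {app: agnhost, role: master}
spec:
  replicas: 1
  selector: {app: agnhost, role: master}
  template:
    metadata:
      labels: {app: agnhost, role: master}
    spec:
      containers:
        - name: agnhost-master
          image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
          ports:
            - containerPort: 6379   # matches the Port shown in describe
```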
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:35:11.838: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May  8 11:35:11.945: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9ec3143f-e706-476a-81c2-9cc15ca3124c" in namespace "downward-api-4901" to be "Succeeded or Failed"
May  8 11:35:11.956: INFO: Pod "downwardapi-volume-9ec3143f-e706-476a-81c2-9cc15ca3124c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.585244ms
May  8 11:35:14.011: INFO: Pod "downwardapi-volume-9ec3143f-e706-476a-81c2-9cc15ca3124c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065825566s
May  8 11:35:16.015: INFO: Pod "downwardapi-volume-9ec3143f-e706-476a-81c2-9cc15ca3124c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.069929699s
STEP: Saw pod success
May  8 11:35:16.015: INFO: Pod "downwardapi-volume-9ec3143f-e706-476a-81c2-9cc15ca3124c" satisfied condition "Succeeded or Failed"
May  8 11:35:16.018: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-9ec3143f-e706-476a-81c2-9cc15ca3124c container client-container: 
STEP: delete the pod
May  8 11:35:16.053: INFO: Waiting for pod downwardapi-volume-9ec3143f-e706-476a-81c2-9cc15ca3124c to disappear
May  8 11:35:16.073: INFO: Pod downwardapi-volume-9ec3143f-e706-476a-81c2-9cc15ca3124c no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:35:16.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4901" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":275,"completed":170,"skipped":2791,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
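[Editor's note] The downward API volume test above exposes a container's CPU limit as a file in the pod. A minimal sketch of the pattern, assuming an illustrative pod name and image (only the `resourceFieldRef` wiring is the point):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # illustrative
spec:
  containers:
    - name: client-container         # container name seen in the log
      image: busybox                 # assumption; the test uses its own test image
      command: ["cat", "/etc/podinfo/cpu_limit"]
      resources:
        limits: {cpu: "1"}           # the limit the mounted file should report
      volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
  volumes:
    - name: podinfo
      downwardAPI:
        items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
```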
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:35:16.084: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0644 on tmpfs
May  8 11:35:16.155: INFO: Waiting up to 5m0s for pod "pod-36b8010b-5571-4cd7-8881-7228a480811f" in namespace "emptydir-6210" to be "Succeeded or Failed"
May  8 11:35:16.159: INFO: Pod "pod-36b8010b-5571-4cd7-8881-7228a480811f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.855944ms
May  8 11:35:18.370: INFO: Pod "pod-36b8010b-5571-4cd7-8881-7228a480811f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.214877975s
May  8 11:35:20.375: INFO: Pod "pod-36b8010b-5571-4cd7-8881-7228a480811f": Phase="Running", Reason="", readiness=true. Elapsed: 4.219190483s
May  8 11:35:22.379: INFO: Pod "pod-36b8010b-5571-4cd7-8881-7228a480811f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.223187038s
STEP: Saw pod success
May  8 11:35:22.379: INFO: Pod "pod-36b8010b-5571-4cd7-8881-7228a480811f" satisfied condition "Succeeded or Failed"
May  8 11:35:22.382: INFO: Trying to get logs from node kali-worker pod pod-36b8010b-5571-4cd7-8881-7228a480811f container test-container: 
STEP: delete the pod
May  8 11:35:22.411: INFO: Waiting for pod pod-36b8010b-5571-4cd7-8881-7228a480811f to disappear
May  8 11:35:22.435: INFO: Pod pod-36b8010b-5571-4cd7-8881-7228a480811f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:35:22.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6210" for this suite.

• [SLOW TEST:6.359 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":171,"skipped":2847,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
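[Editor's note] The "(root,0644,tmpfs)" case above writes a mode-0644 file as root into a memory-backed emptyDir. A minimal sketch, assuming an illustrative image and command (the test uses its own mount-test image):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-example   # illustrative
spec:
  restartPolicy: Never
  containers:
    - name: test-container       # container name seen in the log
      image: busybox             # assumption
      command: ["sh", "-c", "echo hi > /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f"]
      volumeMounts:
        - name: test-volume
          mountPath: /test-volume
  volumes:
    - name: test-volume
      emptyDir:
        medium: Memory           # backs the volume with tmpfs
```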
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:35:22.444: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod liveness-e03b625e-e092-43eb-8a79-08b2c4a9a166 in namespace container-probe-7824
May  8 11:35:26.534: INFO: Started pod liveness-e03b625e-e092-43eb-8a79-08b2c4a9a166 in namespace container-probe-7824
STEP: checking the pod's current state and verifying that restartCount is present
May  8 11:35:26.538: INFO: Initial restart count of pod liveness-e03b625e-e092-43eb-8a79-08b2c4a9a166 is 0
May  8 11:35:40.711: INFO: Restart count of pod container-probe-7824/liveness-e03b625e-e092-43eb-8a79-08b2c4a9a166 is now 1 (14.173544818s elapsed)
May  8 11:36:00.762: INFO: Restart count of pod container-probe-7824/liveness-e03b625e-e092-43eb-8a79-08b2c4a9a166 is now 2 (34.224449975s elapsed)
May  8 11:36:20.851: INFO: Restart count of pod container-probe-7824/liveness-e03b625e-e092-43eb-8a79-08b2c4a9a166 is now 3 (54.313427422s elapsed)
May  8 11:36:40.967: INFO: Restart count of pod container-probe-7824/liveness-e03b625e-e092-43eb-8a79-08b2c4a9a166 is now 4 (1m14.429538085s elapsed)
May  8 11:37:53.451: INFO: Restart count of pod container-probe-7824/liveness-e03b625e-e092-43eb-8a79-08b2c4a9a166 is now 5 (2m26.913468857s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:37:53.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7824" for this suite.

• [SLOW TEST:151.397 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":275,"completed":172,"skipped":2878,"failed":0}
SS
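[Editor's note] The monotonically increasing restart counts above come from a liveness probe that starts failing after the pod deletes its health file; the kubelet restarts the container each time, with the growing back-off visible in the widening intervals between restarts. A minimal sketch of such a pod, with an illustrative image and timings:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-example   # illustrative
spec:
  containers:
    - name: liveness
      image: busybox       # assumption
      command: ["sh", "-c", "touch /tmp/healthy; sleep 10; rm -f /tmp/healthy; sleep 600"]
      livenessProbe:
        exec:
          command: ["cat", "/tmp/healthy"]   # fails once the file is removed
        initialDelaySeconds: 5
        periodSeconds: 5
```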
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:37:53.840: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May  8 11:37:54.887: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May  8 11:37:56.897: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724534674, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724534674, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724534675, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724534674, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May  8 11:37:58.904: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724534674, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724534674, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724534675, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724534674, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May  8 11:38:01.938: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: fetching the /apis discovery document
STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/admissionregistration.k8s.io discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document
STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document
STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:38:01.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6184" for this suite.
STEP: Destroying namespace "webhook-6184-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:8.330 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":275,"completed":173,"skipped":2880,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:38:02.171: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap that has name configmap-test-emptyKey-21c99ce9-fbf6-454f-8391-887eeedd1ca1
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:38:02.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9260" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":275,"completed":174,"skipped":2906,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should be able to update and delete ResourceQuota. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:38:02.303: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to update and delete ResourceQuota. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a ResourceQuota
STEP: Getting a ResourceQuota
STEP: Updating a ResourceQuota
STEP: Verifying a ResourceQuota was modified
STEP: Deleting a ResourceQuota
STEP: Verifying the deleted ResourceQuota
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:38:02.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-4762" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":275,"completed":175,"skipped":3046,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:38:02.779: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:38:10.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-1967" for this suite.

• [SLOW TEST:7.299 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":275,"completed":176,"skipped":3052,"failed":0}
SSSSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:38:10.079: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
May  8 11:38:14.218: INFO: &Pod{ObjectMeta:{send-events-5234e4e0-6445-43d9-9b68-b6fab63bd16a  events-5386 /api/v1/namespaces/events-5386/pods/send-events-5234e4e0-6445-43d9-9b68-b6fab63bd16a ea3940fe-c765-4528-932c-794a7c68ce3b 2577718 0 2020-05-08 11:38:10 +0000 UTC   map[name:foo time:176257407] map[] [] []  [{e2e.test Update v1 2020-05-08 11:38:10 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 116 105 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 112 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 114 103 115 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 114 116 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 99 111 110 116 97 105 110 101 114 80 111 114 116 92 34 58 56 48 44 92 34 112 114 111 116 111 99 111 108 92 34 58 92 34 84 67 80 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 99 111 110 116 97 105 110 101 114 80 111 114 116 34 58 123 125 44 34 102 58 112 114 111 116 111 99 111 108 34 58 123 125 125 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 
44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-08 11:38:13 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 50 46 50 51 55 92 34 
125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dpdtm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dpdtm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dpdtm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChan
gePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 11:38:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 11:38:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 11:38:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 11:38:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:10.244.2.237,StartTime:2020-05-08 11:38:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-08 11:38:13 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://7512ebf9d02e78084acb68a51248a50e8eca5c003741c66e948febcb52a56132,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.237,},},EphemeralContainerStatuses:[]ContainerStatus{},},}

STEP: checking for scheduler event about the pod
May  8 11:38:16.223: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
May  8 11:38:18.228: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:38:18.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-5386" for this suite.

• [SLOW TEST:8.185 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]","total":275,"completed":177,"skipped":3060,"failed":0}
S
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:38:18.264: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating Pod
STEP: Waiting for the pod to be running
STEP: Getting the pod
STEP: Reading file content from the nginx-container
May  8 11:38:24.410: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-1398 PodName:pod-sharedvolume-b435f88a-9b11-4ebd-b658-5a2b165c7552 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May  8 11:38:24.410: INFO: >>> kubeConfig: /root/.kube/config
I0508 11:38:24.447074       7 log.go:172] (0xc002c858c0) (0xc001494280) Create stream
I0508 11:38:24.447106       7 log.go:172] (0xc002c858c0) (0xc001494280) Stream added, broadcasting: 1
I0508 11:38:24.449885       7 log.go:172] (0xc002c858c0) Reply frame received for 1
I0508 11:38:24.449923       7 log.go:172] (0xc002c858c0) (0xc000fd9220) Create stream
I0508 11:38:24.449936       7 log.go:172] (0xc002c858c0) (0xc000fd9220) Stream added, broadcasting: 3
I0508 11:38:24.450969       7 log.go:172] (0xc002c858c0) Reply frame received for 3
I0508 11:38:24.451000       7 log.go:172] (0xc002c858c0) (0xc00104f7c0) Create stream
I0508 11:38:24.451011       7 log.go:172] (0xc002c858c0) (0xc00104f7c0) Stream added, broadcasting: 5
I0508 11:38:24.451942       7 log.go:172] (0xc002c858c0) Reply frame received for 5
I0508 11:38:24.541894       7 log.go:172] (0xc002c858c0) Data frame received for 5
I0508 11:38:24.541945       7 log.go:172] (0xc002c858c0) Data frame received for 3
I0508 11:38:24.542007       7 log.go:172] (0xc000fd9220) (3) Data frame handling
I0508 11:38:24.542038       7 log.go:172] (0xc000fd9220) (3) Data frame sent
I0508 11:38:24.542059       7 log.go:172] (0xc002c858c0) Data frame received for 3
I0508 11:38:24.542081       7 log.go:172] (0xc000fd9220) (3) Data frame handling
I0508 11:38:24.542103       7 log.go:172] (0xc00104f7c0) (5) Data frame handling
I0508 11:38:24.543707       7 log.go:172] (0xc002c858c0) Data frame received for 1
I0508 11:38:24.543726       7 log.go:172] (0xc001494280) (1) Data frame handling
I0508 11:38:24.543737       7 log.go:172] (0xc001494280) (1) Data frame sent
I0508 11:38:24.543752       7 log.go:172] (0xc002c858c0) (0xc001494280) Stream removed, broadcasting: 1
I0508 11:38:24.543772       7 log.go:172] (0xc002c858c0) Go away received
I0508 11:38:24.543897       7 log.go:172] (0xc002c858c0) (0xc001494280) Stream removed, broadcasting: 1
I0508 11:38:24.543922       7 log.go:172] (0xc002c858c0) (0xc000fd9220) Stream removed, broadcasting: 3
I0508 11:38:24.543931       7 log.go:172] (0xc002c858c0) (0xc00104f7c0) Stream removed, broadcasting: 5
May  8 11:38:24.543: INFO: Exec stderr: ""
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:38:24.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1398" for this suite.

• [SLOW TEST:6.289 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":275,"completed":178,"skipped":3061,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:38:24.553: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-map-b1c73744-569c-41d6-8e1a-d605519cd3b8
STEP: Creating a pod to test consume secrets
May  8 11:38:24.649: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-366ec4a0-1ee8-498f-826d-a2924cc6ec59" in namespace "projected-1661" to be "Succeeded or Failed"
May  8 11:38:24.674: INFO: Pod "pod-projected-secrets-366ec4a0-1ee8-498f-826d-a2924cc6ec59": Phase="Pending", Reason="", readiness=false. Elapsed: 24.276657ms
May  8 11:38:26.678: INFO: Pod "pod-projected-secrets-366ec4a0-1ee8-498f-826d-a2924cc6ec59": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028692082s
May  8 11:38:28.681: INFO: Pod "pod-projected-secrets-366ec4a0-1ee8-498f-826d-a2924cc6ec59": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03222593s
STEP: Saw pod success
May  8 11:38:28.682: INFO: Pod "pod-projected-secrets-366ec4a0-1ee8-498f-826d-a2924cc6ec59" satisfied condition "Succeeded or Failed"
May  8 11:38:28.684: INFO: Trying to get logs from node kali-worker pod pod-projected-secrets-366ec4a0-1ee8-498f-826d-a2924cc6ec59 container projected-secret-volume-test: 
STEP: delete the pod
May  8 11:38:28.728: INFO: Waiting for pod pod-projected-secrets-366ec4a0-1ee8-498f-826d-a2924cc6ec59 to disappear
May  8 11:38:28.738: INFO: Pod pod-projected-secrets-366ec4a0-1ee8-498f-826d-a2924cc6ec59 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:38:28.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1661" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":179,"skipped":3080,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:38:28.747: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0508 11:38:43.663256       7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May  8 11:38:43.663: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:38:43.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1275" for this suite.

• [SLOW TEST:15.447 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":275,"completed":180,"skipped":3107,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:38:44.195: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
May  8 11:38:44.739: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:38:54.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-1509" for this suite.

• [SLOW TEST:10.176 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":275,"completed":181,"skipped":3122,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:38:54.371: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
May  8 11:38:54.530: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
May  8 11:38:54.598: INFO: Waiting for terminating namespaces to be deleted...
May  8 11:38:54.601: INFO: 
Logging pods the kubelet thinks are on node kali-worker before test
May  8 11:38:54.607: INFO: pod-init-b17bef27-0cd9-4f9b-9795-59872d34e47e from init-container-1509 started at 2020-05-08 11:38:44 +0000 UTC (1 container status recorded)
May  8 11:38:54.607: INFO: 	Container run1 ready: true, restart count 0
May  8 11:38:54.607: INFO: kindnet-f8plf from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container status recorded)
May  8 11:38:54.607: INFO: 	Container kindnet-cni ready: true, restart count 1
May  8 11:38:54.607: INFO: kube-proxy-vrswj from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container status recorded)
May  8 11:38:54.607: INFO: 	Container kube-proxy ready: true, restart count 0
May  8 11:38:54.607: INFO: 
Logging pods the kubelet thinks are on node kali-worker2 before test
May  8 11:38:54.623: INFO: kindnet-mcdh2 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container status recorded)
May  8 11:38:54.624: INFO: 	Container kindnet-cni ready: true, restart count 0
May  8 11:38:54.624: INFO: kube-proxy-mmnb6 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container status recorded)
May  8 11:38:54.624: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-8535c564-8868-4858-92b2-69aad68c071f 90
STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled
STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled
STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides
STEP: removing the label kubernetes.io/e2e-8535c564-8868-4858-92b2-69aad68c071f off the node kali-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-8535c564-8868-4858-92b2-69aad68c071f
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:39:10.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-4830" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82

• [SLOW TEST:16.573 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":275,"completed":182,"skipped":3135,"failed":0}
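The rule this scheduling test exercises can be sketched outside the cluster: two hostPort mappings collide only when port, protocol, and host IP all overlap (with 0.0.0.0 treated as a wildcard). A minimal sketch of that predicate — illustrative names, not the scheduler's actual code:

```python
def ports_conflict(a, b):
    """Return True when two (hostIP, protocol, hostPort) tuples conflict.

    They conflict only if the port and protocol match AND the host IPs
    overlap (identical, or either side is the wildcard 0.0.0.0).  This is
    the shape of the node-ports check kube-scheduler applies; details are
    simplified here.
    """
    ip_a, proto_a, port_a = a
    ip_b, proto_b, port_b = b
    if port_a != port_b or proto_a != proto_b:
        return False
    return ip_a == ip_b or "0.0.0.0" in (ip_a, ip_b)

# The three pods from the test above: same hostPort 54321 throughout.
pod1 = ("127.0.0.1", "TCP", 54321)
pod2 = ("127.0.0.2", "TCP", 54321)  # different hostIP -> schedulable
pod3 = ("127.0.0.2", "UDP", 54321)  # same hostIP as pod2, different protocol -> schedulable
```

This is why all three pods land on the same node: no pair shares the full (IP, protocol, port) triple.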
SS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:39:10.945: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-upd-285a6fbe-677b-4a8f-b3c2-cf436068a2de
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-285a6fbe-677b-4a8f-b3c2-cf436068a2de
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:39:17.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2505" for this suite.

• [SLOW TEST:6.367 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":183,"skipped":3137,"failed":0}
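The "waiting to observe update in volume" step works because the kubelet projects ConfigMap keys through a `..data` symlink and swaps it atomically on update, so consumers never see a half-written set of keys. A simplified sketch of that pattern (an assumed simplification of the kubelet's atomic writer; stale payload directories are not cleaned up here):

```python
import os
import tempfile

def atomic_update(volume_dir, files):
    """Write a new payload directory, then atomically repoint ..data at it.

    Each key is a stable symlink through ..data, so a single rename makes
    every updated key visible at once (sketch of the kubelet pattern).
    """
    new_dir = tempfile.mkdtemp(prefix="..", dir=volume_dir)
    for name, body in files.items():
        with open(os.path.join(new_dir, name), "w") as f:
            f.write(body)
        link = os.path.join(volume_dir, name)
        if not os.path.islink(link):
            os.symlink(os.path.join("..data", name), link)
    tmp_link = os.path.join(volume_dir, "..data_tmp")
    os.symlink(os.path.basename(new_dir), tmp_link)
    os.rename(tmp_link, os.path.join(volume_dir, "..data"))  # atomic swap

vol = tempfile.mkdtemp()
atomic_update(vol, {"data-1": "value-1"})
atomic_update(vol, {"data-1": "value-2"})   # the "Updating configmap" step
updated = open(os.path.join(vol, "data-1")).read()
```

The pod in the test simply polls the mounted file until the swapped content appears, which is why the update shows up without a restart.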
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:39:17.314: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
May  8 11:39:22.080: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:39:22.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-3027" for this suite.

• [SLOW TEST:5.562 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:133
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":275,"completed":184,"skipped":3167,"failed":0}
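The "Expected: &{DONE} to match" line comes from the kubelet reading the container's `terminationMessagePath` file into the container status after exit. A hedged sketch of that read (the 4096-byte cap is an assumed per-container limit, and the path here is a stand-in for the test's non-default path):

```python
import tempfile

def read_termination_message(path, limit=4096):
    """Read a container's termination message file, truncated to a byte cap,
    the way the kubelet populates status.message (simplified sketch)."""
    try:
        with open(path, "rb") as f:
            return f.read(limit).decode(errors="replace")
    except FileNotFoundError:
        return ""

# The test's container writes DONE to a custom, non-default path as a
# non-root user; only the file contents matter to the kubelet.
with tempfile.NamedTemporaryFile("w", suffix=".log", delete=False) as f:
    f.write("DONE")
msg = read_termination_message(f.name)
```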
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:39:22.877: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating the pod
May  8 11:39:29.778: INFO: Successfully updated pod "annotationupdate99cfb337-2b5e-4b14-a555-8311cd9e06d0"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:39:31.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4222" for this suite.

• [SLOW TEST:8.927 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":275,"completed":185,"skipped":3218,"failed":0}
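The annotation update propagates because a projected downward-API volume renders `metadata.annotations` into a file that the kubelet rewrites when the pod object changes. A sketch of that rendering (one `key="value"` line per annotation, sorted by key — the format is assumed from observed volume contents, and escaping is simplified):

```python
def format_annotations(annotations):
    """Render a pod's metadata.annotations as the downward-API volume file:
    one key="value" line per annotation, sorted by key (sketch; the real
    writer escapes values more carefully)."""
    return "\n".join('{}="{}"'.format(k, v)
                     for k, v in sorted(annotations.items()))

before = format_annotations({"builder": "alice"})
# "Successfully updated pod" above corresponds to a rewrite like this:
after = format_annotations({"builder": "alice", "updated": "true"})
```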
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:39:31.804: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May  8 11:39:31.927: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
May  8 11:39:32.002: INFO: Number of nodes with available pods: 0
May  8 11:39:32.002: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
May  8 11:39:32.042: INFO: Number of nodes with available pods: 0
May  8 11:39:32.042: INFO: Node kali-worker is running more than one daemon pod
May  8 11:39:33.047: INFO: Number of nodes with available pods: 0
May  8 11:39:33.047: INFO: Node kali-worker is running more than one daemon pod
May  8 11:39:34.187: INFO: Number of nodes with available pods: 0
May  8 11:39:34.187: INFO: Node kali-worker is running more than one daemon pod
May  8 11:39:35.110: INFO: Number of nodes with available pods: 0
May  8 11:39:35.110: INFO: Node kali-worker is running more than one daemon pod
May  8 11:39:36.046: INFO: Number of nodes with available pods: 0
May  8 11:39:36.046: INFO: Node kali-worker is running more than one daemon pod
May  8 11:39:37.210: INFO: Number of nodes with available pods: 0
May  8 11:39:37.210: INFO: Node kali-worker is running more than one daemon pod
May  8 11:39:38.103: INFO: Number of nodes with available pods: 1
May  8 11:39:38.103: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
May  8 11:39:38.147: INFO: Number of nodes with available pods: 1
May  8 11:39:38.147: INFO: Number of running nodes: 0, number of available pods: 1
May  8 11:39:39.151: INFO: Number of nodes with available pods: 0
May  8 11:39:39.151: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
May  8 11:39:39.192: INFO: Number of nodes with available pods: 0
May  8 11:39:39.192: INFO: Node kali-worker is running more than one daemon pod
May  8 11:39:40.196: INFO: Number of nodes with available pods: 0
May  8 11:39:40.196: INFO: Node kali-worker is running more than one daemon pod
May  8 11:39:41.196: INFO: Number of nodes with available pods: 0
May  8 11:39:41.196: INFO: Node kali-worker is running more than one daemon pod
May  8 11:39:42.197: INFO: Number of nodes with available pods: 0
May  8 11:39:42.197: INFO: Node kali-worker is running more than one daemon pod
May  8 11:39:43.196: INFO: Number of nodes with available pods: 0
May  8 11:39:43.196: INFO: Node kali-worker is running more than one daemon pod
May  8 11:39:44.196: INFO: Number of nodes with available pods: 0
May  8 11:39:44.196: INFO: Node kali-worker is running more than one daemon pod
May  8 11:39:45.301: INFO: Number of nodes with available pods: 0
May  8 11:39:45.301: INFO: Node kali-worker is running more than one daemon pod
May  8 11:39:46.197: INFO: Number of nodes with available pods: 0
May  8 11:39:46.197: INFO: Node kali-worker is running more than one daemon pod
May  8 11:39:47.196: INFO: Number of nodes with available pods: 0
May  8 11:39:47.196: INFO: Node kali-worker is running more than one daemon pod
May  8 11:39:48.196: INFO: Number of nodes with available pods: 0
May  8 11:39:48.196: INFO: Node kali-worker is running more than one daemon pod
May  8 11:39:49.197: INFO: Number of nodes with available pods: 1
May  8 11:39:49.197: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7597, will wait for the garbage collector to delete the pods
May  8 11:39:49.262: INFO: Deleting DaemonSet.extensions daemon-set took: 5.066073ms
May  8 11:39:49.662: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.21714ms
May  8 11:40:03.766: INFO: Number of nodes with available pods: 0
May  8 11:40:03.766: INFO: Number of running nodes: 0, number of available pods: 0
May  8 11:40:03.768: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7597/daemonsets","resourceVersion":"2578532"},"items":null}

May  8 11:40:03.770: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7597/pods","resourceVersion":"2578532"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:40:03.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-7597" for this suite.

• [SLOW TEST:32.002 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":275,"completed":186,"skipped":3256,"failed":0}
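The label dance above (blue, then green) boils down to the DaemonSet controller's placement rule: a daemon pod is desired on exactly the nodes whose labels satisfy the pod's nodeSelector. A minimal sketch of that rule, with taints, tolerations, and the other scheduling predicates omitted:

```python
def nodes_needing_daemon_pod(nodes, node_selector):
    """Return the nodes on which a DaemonSet with this nodeSelector wants a
    pod: every selector entry must match the node's labels (simplified)."""
    return sorted(name for name, labels in nodes.items()
                  if all(labels.get(k) == v for k, v in node_selector.items()))

nodes = {"kali-worker": {"color": "blue"}, "kali-worker2": {}}
blue = nodes_needing_daemon_pod(nodes, {"color": "blue"})

# Relabel the node green, as the test does, and the desired set empties,
# so the controller evicts the daemon pod.
nodes["kali-worker"]["color"] = "green"
after_relabel = nodes_needing_daemon_pod(nodes, {"color": "blue"})
```

This is why the pod count goes 0 → 1 after labeling blue, then back to 0 once the label changes to green.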
SSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:40:03.807: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-8666.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-8666.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8666.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-8666.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-8666.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8666.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May  8 11:40:11.989: INFO: DNS probes using dns-8666/dns-test-000ad894-26f3-4f7d-b92a-d3353d12d766 succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:40:12.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8666" for this suite.

• [SLOW TEST:9.045 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":275,"completed":187,"skipped":3270,"failed":0}
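The probe scripts above derive the pod's A record with `awk`, dash-joining the pod IP. The naming convention being checked is `<ip-with-dashes>.<namespace>.pod.<cluster-domain>`, sketched here (the IP is an illustrative example, not one from this run):

```python
def pod_a_record(pod_ip, namespace, cluster_domain="cluster.local"):
    """Build the pod A-record name the wheezy/jessie probes query:
    dots in the pod IP become dashes, suffixed with <ns>.pod.<domain>."""
    return "{}.{}.pod.{}".format(pod_ip.replace(".", "-"),
                                 namespace, cluster_domain)

record = pod_a_record("10.244.1.5", "dns-8666")
```

The headless-service hostname lookups (`dns-querier-2.dns-test-service-2....svc.cluster.local`) in the same scripts exercise the parallel `<hostname>.<service>.<ns>.svc.<domain>` convention.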
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:40:12.852: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May  8 11:40:13.492: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with known and required properties
May  8 11:40:17.482: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4954 create -f -'
May  8 11:40:23.222: INFO: stderr: ""
May  8 11:40:23.222: INFO: stdout: "e2e-test-crd-publish-openapi-2671-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
May  8 11:40:23.222: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4954 delete e2e-test-crd-publish-openapi-2671-crds test-foo'
May  8 11:40:23.352: INFO: stderr: ""
May  8 11:40:23.352: INFO: stdout: "e2e-test-crd-publish-openapi-2671-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
May  8 11:40:23.352: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4954 apply -f -'
May  8 11:40:23.623: INFO: stderr: ""
May  8 11:40:23.623: INFO: stdout: "e2e-test-crd-publish-openapi-2671-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
May  8 11:40:23.623: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4954 delete e2e-test-crd-publish-openapi-2671-crds test-foo'
May  8 11:40:23.735: INFO: stderr: ""
May  8 11:40:23.735: INFO: stdout: "e2e-test-crd-publish-openapi-2671-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema
May  8 11:40:23.735: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4954 create -f -'
May  8 11:40:23.971: INFO: rc: 1
May  8 11:40:23.971: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4954 apply -f -'
May  8 11:40:24.196: INFO: rc: 1
STEP: client-side validation (kubectl create and apply) rejects request without required properties
May  8 11:40:24.196: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4954 create -f -'
May  8 11:40:24.441: INFO: rc: 1
May  8 11:40:24.441: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4954 apply -f -'
May  8 11:40:24.656: INFO: rc: 1
STEP: kubectl explain works to explain CR properties
May  8 11:40:24.657: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2671-crds'
May  8 11:40:24.871: INFO: stderr: ""
May  8 11:40:24.871: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-2671-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n     Foo CRD for Testing\n\nFIELDS:\n   apiVersion\t\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t\n     Specification of Foo\n\n   status\t\n     Status of Foo\n\n"
STEP: kubectl explain works to explain CR properties recursively
May  8 11:40:24.872: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2671-crds.metadata'
May  8 11:40:25.177: INFO: stderr: ""
May  8 11:40:25.177: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-2671-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n     ObjectMeta is metadata that all persisted resources must have, which\n     includes all objects users must create.\n\nFIELDS:\n   annotations\t\n     Annotations is an unstructured key value map stored with a resource that\n     may be set by external tools to store and retrieve arbitrary metadata. They\n     are not queryable and should be preserved when modifying objects. More\n     info: http://kubernetes.io/docs/user-guide/annotations\n\n   clusterName\t\n     The name of the cluster which the object belongs to. This is used to\n     distinguish resources with same name and namespace in different clusters.\n     This field is not set anywhere right now and apiserver is going to ignore\n     it if set in create or update request.\n\n   creationTimestamp\t\n     CreationTimestamp is a timestamp representing the server time when this\n     object was created. It is not guaranteed to be set in happens-before order\n     across separate operations. Clients may not set this value. It is\n     represented in RFC3339 form and is in UTC. Populated by the system.\n     Read-only. Null for lists. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   deletionGracePeriodSeconds\t\n     Number of seconds allowed for this object to gracefully terminate before it\n     will be removed from the system. Only set when deletionTimestamp is also\n     set. May only be shortened. Read-only.\n\n   deletionTimestamp\t\n     DeletionTimestamp is RFC 3339 date and time at which this resource will be\n     deleted. 
This field is set by the server when a graceful deletion is\n     requested by the user, and is not directly settable by a client. The\n     resource is expected to be deleted (no longer visible from resource lists,\n     and not reachable by name) after the time in this field, once the\n     finalizers list is empty. As long as the finalizers list contains items,\n     deletion is blocked. Once the deletionTimestamp is set, this value may not\n     be unset or be set further into the future, although it may be shortened or\n     the resource may be deleted prior to this time. For example, a user may\n     request that a pod is deleted in 30 seconds. The Kubelet will react by\n     sending a graceful termination signal to the containers in the pod. After\n     that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n     to the container and after cleanup, remove the pod from the API. In the\n     presence of network partitions, this object may still exist after this\n     timestamp, until an administrator or automated process can determine the\n     resource is fully terminated. If not set, graceful deletion of the object\n     has not been requested. Populated by the system when a graceful deletion is\n     requested. Read-only. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   finalizers\t<[]string>\n     Must be empty before the object is deleted from the registry. Each entry is\n     an identifier for the responsible component that will remove the entry from\n     the list. If the deletionTimestamp of the object is non-nil, entries in\n     this list can only be removed. Finalizers may be processed and removed in\n     any order. Order is NOT enforced because it introduces significant risk of\n     stuck finalizers. finalizers is a shared field, any actor with permission\n     can reorder it. 
If the finalizer list is processed in order, then this can\n     lead to a situation in which the component responsible for the first\n     finalizer in the list is waiting for a signal (field value, external\n     system, or other) produced by a component responsible for a finalizer later\n     in the list, resulting in a deadlock. Without enforced ordering finalizers\n     are free to order amongst themselves and are not vulnerable to ordering\n     changes in the list.\n\n   generateName\t\n     GenerateName is an optional prefix, used by the server, to generate a\n     unique name ONLY IF the Name field has not been provided. If this field is\n     used, the name returned to the client will be different than the name\n     passed. This value will also be combined with a unique suffix. The provided\n     value has the same validation rules as the Name field, and may be truncated\n     by the length of the suffix required to make the value unique on the\n     server. If this field is specified and the generated name exists, the\n     server will NOT return a 409 - instead, it will either return 201 Created\n     or 500 with Reason ServerTimeout indicating a unique name could not be\n     found in the time allotted, and the client should retry (optionally after\n     the time indicated in the Retry-After header). Applied only if Name is not\n     specified. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n   generation\t\n     A sequence number representing a specific generation of the desired state.\n     Populated by the system. Read-only.\n\n   labels\t\n     Map of string keys and values that can be used to organize and categorize\n     (scope and select) objects. May match selectors of replication controllers\n     and services. 
More info: http://kubernetes.io/docs/user-guide/labels\n\n   managedFields\t<[]Object>\n     ManagedFields maps workflow-id and version to the set of fields that are\n     managed by that workflow. This is mostly for internal housekeeping, and\n     users typically shouldn't need to set or understand this field. A workflow\n     can be the user's name, a controller's name, or the name of a specific\n     apply path like \"ci-cd\". The set of fields is always in the version that\n     the workflow used when modifying the object.\n\n   name\t\n     Name must be unique within a namespace. Is required when creating\n     resources, although some resources may allow a client to request the\n     generation of an appropriate name automatically. Name is primarily intended\n     for creation idempotence and configuration definition. Cannot be updated.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n   namespace\t\n     Namespace defines the space within each name must be unique. An empty\n     namespace is equivalent to the \"default\" namespace, but \"default\" is the\n     canonical representation. Not all objects are required to be scoped to a\n     namespace - the value of this field for those objects will be empty. Must\n     be a DNS_LABEL. Cannot be updated. More info:\n     http://kubernetes.io/docs/user-guide/namespaces\n\n   ownerReferences\t<[]Object>\n     List of objects depended by this object. If ALL objects in the list have\n     been deleted, this object will be garbage collected. If this object is\n     managed by a controller, then an entry in this list will point to this\n     controller, with the controller field set to true. There cannot be more\n     than one managing controller.\n\n   resourceVersion\t\n     An opaque value that represents the internal version of this object that\n     can be used by clients to determine when objects have changed. 
May be used\n     for optimistic concurrency, change detection, and the watch operation on a\n     resource or set of resources. Clients must treat these values as opaque and\n     passed unmodified back to the server. They may only be valid for a\n     particular resource or set of resources. Populated by the system.\n     Read-only. Value must be treated as opaque by clients and . More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n   selfLink\t\n     SelfLink is a URL representing this object. Populated by the system.\n     Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n     release and the field is planned to be removed in 1.21 release.\n\n   uid\t\n     UID is the unique in time and space value for this object. It is typically\n     generated by the server on successful creation of a resource and is not\n     allowed to change on PUT operations. Populated by the system. Read-only.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n"
May  8 11:40:25.178: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2671-crds.spec'
May  8 11:40:25.410: INFO: stderr: ""
May  8 11:40:25.410: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-2671-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
May  8 11:40:25.411: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2671-crds.spec.bars'
May  8 11:40:25.654: INFO: stderr: ""
May  8 11:40:25.654: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-2671-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t -required-\n     Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
May  8 11:40:25.654: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2671-crds.spec.bars2'
May  8 11:40:25.914: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:40:27.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-4954" for this suite.

• [SLOW TEST:15.043 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":275,"completed":188,"skipped":3301,"failed":0}
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:40:27.895: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0644 on tmpfs
May  8 11:40:28.020: INFO: Waiting up to 5m0s for pod "pod-125eb189-a7da-484d-8497-fd400b42dee8" in namespace "emptydir-1054" to be "Succeeded or Failed"
May  8 11:40:28.024: INFO: Pod "pod-125eb189-a7da-484d-8497-fd400b42dee8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.255659ms
May  8 11:40:30.029: INFO: Pod "pod-125eb189-a7da-484d-8497-fd400b42dee8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009062759s
May  8 11:40:32.091: INFO: Pod "pod-125eb189-a7da-484d-8497-fd400b42dee8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.071317269s
May  8 11:40:34.096: INFO: Pod "pod-125eb189-a7da-484d-8497-fd400b42dee8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.075539014s
STEP: Saw pod success
May  8 11:40:34.096: INFO: Pod "pod-125eb189-a7da-484d-8497-fd400b42dee8" satisfied condition "Succeeded or Failed"
May  8 11:40:34.099: INFO: Trying to get logs from node kali-worker pod pod-125eb189-a7da-484d-8497-fd400b42dee8 container test-container: 
STEP: delete the pod
May  8 11:40:34.156: INFO: Waiting for pod pod-125eb189-a7da-484d-8497-fd400b42dee8 to disappear
May  8 11:40:34.223: INFO: Pod pod-125eb189-a7da-484d-8497-fd400b42dee8 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:40:34.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1054" for this suite.

• [SLOW TEST:6.337 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":189,"skipped":3303,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:40:34.233: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May  8 11:40:34.318: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:40:38.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5130" for this suite.
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":275,"completed":190,"skipped":3323,"failed":0}

------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:40:38.494: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
May  8 11:40:38.601: INFO: Waiting up to 5m0s for pod "downward-api-8e0e3419-0f0a-4a7d-a0ab-73e3cac983eb" in namespace "downward-api-1686" to be "Succeeded or Failed"
May  8 11:40:38.630: INFO: Pod "downward-api-8e0e3419-0f0a-4a7d-a0ab-73e3cac983eb": Phase="Pending", Reason="", readiness=false. Elapsed: 28.188642ms
May  8 11:40:40.635: INFO: Pod "downward-api-8e0e3419-0f0a-4a7d-a0ab-73e3cac983eb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033033165s
May  8 11:40:42.639: INFO: Pod "downward-api-8e0e3419-0f0a-4a7d-a0ab-73e3cac983eb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037184124s
STEP: Saw pod success
May  8 11:40:42.639: INFO: Pod "downward-api-8e0e3419-0f0a-4a7d-a0ab-73e3cac983eb" satisfied condition "Succeeded or Failed"
May  8 11:40:42.643: INFO: Trying to get logs from node kali-worker pod downward-api-8e0e3419-0f0a-4a7d-a0ab-73e3cac983eb container dapi-container: 
STEP: delete the pod
May  8 11:40:42.707: INFO: Waiting for pod downward-api-8e0e3419-0f0a-4a7d-a0ab-73e3cac983eb to disappear
May  8 11:40:42.719: INFO: Pod downward-api-8e0e3419-0f0a-4a7d-a0ab-73e3cac983eb no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:40:42.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1686" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":275,"completed":191,"skipped":3323,"failed":0}
SS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:40:42.727: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-0f97b6eb-5b28-43f6-8ae7-ae80cb329b70
STEP: Creating a pod to test consume secrets
May  8 11:40:43.103: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-5db5b29a-e761-4c49-a941-ee257cf41ba7" in namespace "projected-6122" to be "Succeeded or Failed"
May  8 11:40:43.110: INFO: Pod "pod-projected-secrets-5db5b29a-e761-4c49-a941-ee257cf41ba7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.071309ms
May  8 11:40:45.113: INFO: Pod "pod-projected-secrets-5db5b29a-e761-4c49-a941-ee257cf41ba7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009677652s
May  8 11:40:47.117: INFO: Pod "pod-projected-secrets-5db5b29a-e761-4c49-a941-ee257cf41ba7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013106436s
STEP: Saw pod success
May  8 11:40:47.117: INFO: Pod "pod-projected-secrets-5db5b29a-e761-4c49-a941-ee257cf41ba7" satisfied condition "Succeeded or Failed"
May  8 11:40:47.120: INFO: Trying to get logs from node kali-worker2 pod pod-projected-secrets-5db5b29a-e761-4c49-a941-ee257cf41ba7 container projected-secret-volume-test: 
STEP: delete the pod
May  8 11:40:47.177: INFO: Waiting for pod pod-projected-secrets-5db5b29a-e761-4c49-a941-ee257cf41ba7 to disappear
May  8 11:40:47.180: INFO: Pod pod-projected-secrets-5db5b29a-e761-4c49-a941-ee257cf41ba7 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:40:47.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6122" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":192,"skipped":3325,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  custom resource defaulting for requests and from storage works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:40:47.194: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] custom resource defaulting for requests and from storage works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May  8 11:40:47.441: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:40:48.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-3190" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":275,"completed":193,"skipped":3349,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:40:48.744: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating service multi-endpoint-test in namespace services-9815
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9815 to expose endpoints map[]
May  8 11:40:48.894: INFO: Get endpoints failed (9.593635ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
May  8 11:40:49.898: INFO: successfully validated that service multi-endpoint-test in namespace services-9815 exposes endpoints map[] (1.013897012s elapsed)
STEP: Creating pod pod1 in namespace services-9815
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9815 to expose endpoints map[pod1:[100]]
May  8 11:40:53.033: INFO: successfully validated that service multi-endpoint-test in namespace services-9815 exposes endpoints map[pod1:[100]] (3.126640328s elapsed)
STEP: Creating pod pod2 in namespace services-9815
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9815 to expose endpoints map[pod1:[100] pod2:[101]]
May  8 11:40:56.149: INFO: successfully validated that service multi-endpoint-test in namespace services-9815 exposes endpoints map[pod1:[100] pod2:[101]] (3.110617513s elapsed)
STEP: Deleting pod pod1 in namespace services-9815
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9815 to expose endpoints map[pod2:[101]]
May  8 11:40:57.238: INFO: successfully validated that service multi-endpoint-test in namespace services-9815 exposes endpoints map[pod2:[101]] (1.04263489s elapsed)
STEP: Deleting pod pod2 in namespace services-9815
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9815 to expose endpoints map[]
May  8 11:40:58.283: INFO: successfully validated that service multi-endpoint-test in namespace services-9815 exposes endpoints map[] (1.037919966s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:40:58.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9815" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:9.826 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods  [Conformance]","total":275,"completed":194,"skipped":3398,"failed":0}
SS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:40:58.570: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-568f67e5-4005-4db2-ba4c-9001eb92094c
STEP: Creating a pod to test consume configMaps
May  8 11:40:58.673: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f0a80168-245d-4a05-b5a7-3736bf0fd62d" in namespace "projected-3364" to be "Succeeded or Failed"
May  8 11:40:58.677: INFO: Pod "pod-projected-configmaps-f0a80168-245d-4a05-b5a7-3736bf0fd62d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.859577ms
May  8 11:41:00.682: INFO: Pod "pod-projected-configmaps-f0a80168-245d-4a05-b5a7-3736bf0fd62d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008227806s
May  8 11:41:02.684: INFO: Pod "pod-projected-configmaps-f0a80168-245d-4a05-b5a7-3736bf0fd62d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010971845s
STEP: Saw pod success
May  8 11:41:02.684: INFO: Pod "pod-projected-configmaps-f0a80168-245d-4a05-b5a7-3736bf0fd62d" satisfied condition "Succeeded or Failed"
May  8 11:41:02.686: INFO: Trying to get logs from node kali-worker pod pod-projected-configmaps-f0a80168-245d-4a05-b5a7-3736bf0fd62d container projected-configmap-volume-test: 
STEP: delete the pod
May  8 11:41:02.703: INFO: Waiting for pod pod-projected-configmaps-f0a80168-245d-4a05-b5a7-3736bf0fd62d to disappear
May  8 11:41:02.716: INFO: Pod pod-projected-configmaps-f0a80168-245d-4a05-b5a7-3736bf0fd62d no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:41:02.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3364" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":275,"completed":195,"skipped":3400,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:41:02.724: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test env composition
May  8 11:41:02.831: INFO: Waiting up to 5m0s for pod "var-expansion-f3195093-a1be-4730-a4dd-ba1b7fb4fb99" in namespace "var-expansion-2105" to be "Succeeded or Failed"
May  8 11:41:03.020: INFO: Pod "var-expansion-f3195093-a1be-4730-a4dd-ba1b7fb4fb99": Phase="Pending", Reason="", readiness=false. Elapsed: 189.584712ms
May  8 11:41:05.025: INFO: Pod "var-expansion-f3195093-a1be-4730-a4dd-ba1b7fb4fb99": Phase="Pending", Reason="", readiness=false. Elapsed: 2.194349052s
May  8 11:41:07.030: INFO: Pod "var-expansion-f3195093-a1be-4730-a4dd-ba1b7fb4fb99": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.198866133s
STEP: Saw pod success
May  8 11:41:07.030: INFO: Pod "var-expansion-f3195093-a1be-4730-a4dd-ba1b7fb4fb99" satisfied condition "Succeeded or Failed"
May  8 11:41:07.033: INFO: Trying to get logs from node kali-worker pod var-expansion-f3195093-a1be-4730-a4dd-ba1b7fb4fb99 container dapi-container: 
STEP: delete the pod
May  8 11:41:07.148: INFO: Waiting for pod var-expansion-f3195093-a1be-4730-a4dd-ba1b7fb4fb99 to disappear
May  8 11:41:07.151: INFO: Pod var-expansion-f3195093-a1be-4730-a4dd-ba1b7fb4fb99 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:41:07.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-2105" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":275,"completed":196,"skipped":3433,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:41:07.159: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap configmap-8980/configmap-test-90e56747-f900-4cb9-a198-bd0b417aa08e
STEP: Creating a pod to test consume configMaps
May  8 11:41:07.282: INFO: Waiting up to 5m0s for pod "pod-configmaps-53d4d8b8-41b9-4172-863f-45fcba993892" in namespace "configmap-8980" to be "Succeeded or Failed"
May  8 11:41:07.284: INFO: Pod "pod-configmaps-53d4d8b8-41b9-4172-863f-45fcba993892": Phase="Pending", Reason="", readiness=false. Elapsed: 1.730438ms
May  8 11:41:09.451: INFO: Pod "pod-configmaps-53d4d8b8-41b9-4172-863f-45fcba993892": Phase="Pending", Reason="", readiness=false. Elapsed: 2.168610516s
May  8 11:41:11.456: INFO: Pod "pod-configmaps-53d4d8b8-41b9-4172-863f-45fcba993892": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.173346047s
STEP: Saw pod success
May  8 11:41:11.456: INFO: Pod "pod-configmaps-53d4d8b8-41b9-4172-863f-45fcba993892" satisfied condition "Succeeded or Failed"
May  8 11:41:11.459: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-53d4d8b8-41b9-4172-863f-45fcba993892 container env-test: 
STEP: delete the pod
May  8 11:41:11.542: INFO: Waiting for pod pod-configmaps-53d4d8b8-41b9-4172-863f-45fcba993892 to disappear
May  8 11:41:11.564: INFO: Pod pod-configmaps-53d4d8b8-41b9-4172-863f-45fcba993892 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:41:11.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8980" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":275,"completed":197,"skipped":3446,"failed":0}
SSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:41:11.572: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3827.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-3827.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3827.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-3827.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3827.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-3827.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3827.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-3827.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3827.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-3827.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3827.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-3827.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3827.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 161.154.99.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.99.154.161_udp@PTR;check="$$(dig +tcp +noall +answer +search 161.154.99.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.99.154.161_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3827.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-3827.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3827.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-3827.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3827.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-3827.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3827.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-3827.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3827.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-3827.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3827.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-3827.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3827.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 161.154.99.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.99.154.161_udp@PTR;check="$$(dig +tcp +noall +answer +search 161.154.99.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.99.154.161_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May  8 11:41:17.824: INFO: Unable to read wheezy_udp@dns-test-service.dns-3827.svc.cluster.local from pod dns-3827/dns-test-c1a4d86e-b567-4ec7-b872-245f72e76eea: the server could not find the requested resource (get pods dns-test-c1a4d86e-b567-4ec7-b872-245f72e76eea)
May  8 11:41:17.827: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3827.svc.cluster.local from pod dns-3827/dns-test-c1a4d86e-b567-4ec7-b872-245f72e76eea: the server could not find the requested resource (get pods dns-test-c1a4d86e-b567-4ec7-b872-245f72e76eea)
May  8 11:41:17.829: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3827.svc.cluster.local from pod dns-3827/dns-test-c1a4d86e-b567-4ec7-b872-245f72e76eea: the server could not find the requested resource (get pods dns-test-c1a4d86e-b567-4ec7-b872-245f72e76eea)
May  8 11:41:17.832: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3827.svc.cluster.local from pod dns-3827/dns-test-c1a4d86e-b567-4ec7-b872-245f72e76eea: the server could not find the requested resource (get pods dns-test-c1a4d86e-b567-4ec7-b872-245f72e76eea)
May  8 11:41:17.859: INFO: Unable to read jessie_udp@dns-test-service.dns-3827.svc.cluster.local from pod dns-3827/dns-test-c1a4d86e-b567-4ec7-b872-245f72e76eea: the server could not find the requested resource (get pods dns-test-c1a4d86e-b567-4ec7-b872-245f72e76eea)
May  8 11:41:17.862: INFO: Unable to read jessie_tcp@dns-test-service.dns-3827.svc.cluster.local from pod dns-3827/dns-test-c1a4d86e-b567-4ec7-b872-245f72e76eea: the server could not find the requested resource (get pods dns-test-c1a4d86e-b567-4ec7-b872-245f72e76eea)
May  8 11:41:17.865: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3827.svc.cluster.local from pod dns-3827/dns-test-c1a4d86e-b567-4ec7-b872-245f72e76eea: the server could not find the requested resource (get pods dns-test-c1a4d86e-b567-4ec7-b872-245f72e76eea)
May  8 11:41:17.868: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3827.svc.cluster.local from pod dns-3827/dns-test-c1a4d86e-b567-4ec7-b872-245f72e76eea: the server could not find the requested resource (get pods dns-test-c1a4d86e-b567-4ec7-b872-245f72e76eea)
May  8 11:41:17.888: INFO: Lookups using dns-3827/dns-test-c1a4d86e-b567-4ec7-b872-245f72e76eea failed for: [wheezy_udp@dns-test-service.dns-3827.svc.cluster.local wheezy_tcp@dns-test-service.dns-3827.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3827.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3827.svc.cluster.local jessie_udp@dns-test-service.dns-3827.svc.cluster.local jessie_tcp@dns-test-service.dns-3827.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3827.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3827.svc.cluster.local]

May  8 11:41:22.907: INFO: Unable to read wheezy_udp@dns-test-service.dns-3827.svc.cluster.local from pod dns-3827/dns-test-c1a4d86e-b567-4ec7-b872-245f72e76eea: the server could not find the requested resource (get pods dns-test-c1a4d86e-b567-4ec7-b872-245f72e76eea)
May  8 11:41:22.911: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3827.svc.cluster.local from pod dns-3827/dns-test-c1a4d86e-b567-4ec7-b872-245f72e76eea: the server could not find the requested resource (get pods dns-test-c1a4d86e-b567-4ec7-b872-245f72e76eea)
May  8 11:41:22.914: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3827.svc.cluster.local from pod dns-3827/dns-test-c1a4d86e-b567-4ec7-b872-245f72e76eea: the server could not find the requested resource (get pods dns-test-c1a4d86e-b567-4ec7-b872-245f72e76eea)
May  8 11:41:22.917: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3827.svc.cluster.local from pod dns-3827/dns-test-c1a4d86e-b567-4ec7-b872-245f72e76eea: the server could not find the requested resource (get pods dns-test-c1a4d86e-b567-4ec7-b872-245f72e76eea)
May  8 11:41:22.936: INFO: Unable to read jessie_udp@dns-test-service.dns-3827.svc.cluster.local from pod dns-3827/dns-test-c1a4d86e-b567-4ec7-b872-245f72e76eea: the server could not find the requested resource (get pods dns-test-c1a4d86e-b567-4ec7-b872-245f72e76eea)
May  8 11:41:22.938: INFO: Unable to read jessie_tcp@dns-test-service.dns-3827.svc.cluster.local from pod dns-3827/dns-test-c1a4d86e-b567-4ec7-b872-245f72e76eea: the server could not find the requested resource (get pods dns-test-c1a4d86e-b567-4ec7-b872-245f72e76eea)
May  8 11:41:22.941: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3827.svc.cluster.local from pod dns-3827/dns-test-c1a4d86e-b567-4ec7-b872-245f72e76eea: the server could not find the requested resource (get pods dns-test-c1a4d86e-b567-4ec7-b872-245f72e76eea)
May  8 11:41:22.944: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3827.svc.cluster.local from pod dns-3827/dns-test-c1a4d86e-b567-4ec7-b872-245f72e76eea: the server could not find the requested resource (get pods dns-test-c1a4d86e-b567-4ec7-b872-245f72e76eea)
May  8 11:41:22.996: INFO: Lookups using dns-3827/dns-test-c1a4d86e-b567-4ec7-b872-245f72e76eea failed for: [wheezy_udp@dns-test-service.dns-3827.svc.cluster.local wheezy_tcp@dns-test-service.dns-3827.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3827.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3827.svc.cluster.local jessie_udp@dns-test-service.dns-3827.svc.cluster.local jessie_tcp@dns-test-service.dns-3827.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3827.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3827.svc.cluster.local]

May  8 11:41:27.893: INFO: Unable to read wheezy_udp@dns-test-service.dns-3827.svc.cluster.local from pod dns-3827/dns-test-c1a4d86e-b567-4ec7-b872-245f72e76eea: the server could not find the requested resource (get pods dns-test-c1a4d86e-b567-4ec7-b872-245f72e76eea)
May  8 11:41:27.898: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3827.svc.cluster.local from pod dns-3827/dns-test-c1a4d86e-b567-4ec7-b872-245f72e76eea: the server could not find the requested resource (get pods dns-test-c1a4d86e-b567-4ec7-b872-245f72e76eea)
May  8 11:41:27.902: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3827.svc.cluster.local from pod dns-3827/dns-test-c1a4d86e-b567-4ec7-b872-245f72e76eea: the server could not find the requested resource (get pods dns-test-c1a4d86e-b567-4ec7-b872-245f72e76eea)
May  8 11:41:27.905: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3827.svc.cluster.local from pod dns-3827/dns-test-c1a4d86e-b567-4ec7-b872-245f72e76eea: the server could not find the requested resource (get pods dns-test-c1a4d86e-b567-4ec7-b872-245f72e76eea)
May  8 11:41:27.925: INFO: Unable to read jessie_udp@dns-test-service.dns-3827.svc.cluster.local from pod dns-3827/dns-test-c1a4d86e-b567-4ec7-b872-245f72e76eea: the server could not find the requested resource (get pods dns-test-c1a4d86e-b567-4ec7-b872-245f72e76eea)
May  8 11:41:27.928: INFO: Unable to read jessie_tcp@dns-test-service.dns-3827.svc.cluster.local from pod dns-3827/dns-test-c1a4d86e-b567-4ec7-b872-245f72e76eea: the server could not find the requested resource (get pods dns-test-c1a4d86e-b567-4ec7-b872-245f72e76eea)
May  8 11:41:27.931: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3827.svc.cluster.local from pod dns-3827/dns-test-c1a4d86e-b567-4ec7-b872-245f72e76eea: the server could not find the requested resource (get pods dns-test-c1a4d86e-b567-4ec7-b872-245f72e76eea)
May  8 11:41:27.934: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3827.svc.cluster.local from pod dns-3827/dns-test-c1a4d86e-b567-4ec7-b872-245f72e76eea: the server could not find the requested resource (get pods dns-test-c1a4d86e-b567-4ec7-b872-245f72e76eea)
May  8 11:41:27.951: INFO: Lookups using dns-3827/dns-test-c1a4d86e-b567-4ec7-b872-245f72e76eea failed for: [wheezy_udp@dns-test-service.dns-3827.svc.cluster.local wheezy_tcp@dns-test-service.dns-3827.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3827.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3827.svc.cluster.local jessie_udp@dns-test-service.dns-3827.svc.cluster.local jessie_tcp@dns-test-service.dns-3827.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3827.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3827.svc.cluster.local]

May  8 11:41:32.894: INFO: Unable to read wheezy_udp@dns-test-service.dns-3827.svc.cluster.local from pod dns-3827/dns-test-c1a4d86e-b567-4ec7-b872-245f72e76eea: the server could not find the requested resource (get pods dns-test-c1a4d86e-b567-4ec7-b872-245f72e76eea)
May  8 11:41:32.898: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3827.svc.cluster.local from pod dns-3827/dns-test-c1a4d86e-b567-4ec7-b872-245f72e76eea: the server could not find the requested resource (get pods dns-test-c1a4d86e-b567-4ec7-b872-245f72e76eea)
May  8 11:41:32.902: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3827.svc.cluster.local from pod dns-3827/dns-test-c1a4d86e-b567-4ec7-b872-245f72e76eea: the server could not find the requested resource (get pods dns-test-c1a4d86e-b567-4ec7-b872-245f72e76eea)
May  8 11:41:32.906: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3827.svc.cluster.local from pod dns-3827/dns-test-c1a4d86e-b567-4ec7-b872-245f72e76eea: the server could not find the requested resource (get pods dns-test-c1a4d86e-b567-4ec7-b872-245f72e76eea)
May  8 11:41:32.926: INFO: Unable to read jessie_udp@dns-test-service.dns-3827.svc.cluster.local from pod dns-3827/dns-test-c1a4d86e-b567-4ec7-b872-245f72e76eea: the server could not find the requested resource (get pods dns-test-c1a4d86e-b567-4ec7-b872-245f72e76eea)
May  8 11:41:32.929: INFO: Unable to read jessie_tcp@dns-test-service.dns-3827.svc.cluster.local from pod dns-3827/dns-test-c1a4d86e-b567-4ec7-b872-245f72e76eea: the server could not find the requested resource (get pods dns-test-c1a4d86e-b567-4ec7-b872-245f72e76eea)
May  8 11:41:32.932: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3827.svc.cluster.local from pod dns-3827/dns-test-c1a4d86e-b567-4ec7-b872-245f72e76eea: the server could not find the requested resource (get pods dns-test-c1a4d86e-b567-4ec7-b872-245f72e76eea)
May  8 11:41:32.935: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3827.svc.cluster.local from pod dns-3827/dns-test-c1a4d86e-b567-4ec7-b872-245f72e76eea: the server could not find the requested resource (get pods dns-test-c1a4d86e-b567-4ec7-b872-245f72e76eea)
May  8 11:41:32.952: INFO: Lookups using dns-3827/dns-test-c1a4d86e-b567-4ec7-b872-245f72e76eea failed for: [wheezy_udp@dns-test-service.dns-3827.svc.cluster.local wheezy_tcp@dns-test-service.dns-3827.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3827.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3827.svc.cluster.local jessie_udp@dns-test-service.dns-3827.svc.cluster.local jessie_tcp@dns-test-service.dns-3827.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3827.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3827.svc.cluster.local]

May  8 11:41:37.894: INFO: Unable to read wheezy_udp@dns-test-service.dns-3827.svc.cluster.local from pod dns-3827/dns-test-c1a4d86e-b567-4ec7-b872-245f72e76eea: the server could not find the requested resource (get pods dns-test-c1a4d86e-b567-4ec7-b872-245f72e76eea)
May  8 11:41:37.897: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3827.svc.cluster.local from pod dns-3827/dns-test-c1a4d86e-b567-4ec7-b872-245f72e76eea: the server could not find the requested resource (get pods dns-test-c1a4d86e-b567-4ec7-b872-245f72e76eea)
May  8 11:41:37.901: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3827.svc.cluster.local from pod dns-3827/dns-test-c1a4d86e-b567-4ec7-b872-245f72e76eea: the server could not find the requested resource (get pods dns-test-c1a4d86e-b567-4ec7-b872-245f72e76eea)
May  8 11:41:37.903: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3827.svc.cluster.local from pod dns-3827/dns-test-c1a4d86e-b567-4ec7-b872-245f72e76eea: the server could not find the requested resource (get pods dns-test-c1a4d86e-b567-4ec7-b872-245f72e76eea)
May  8 11:41:37.927: INFO: Unable to read jessie_udp@dns-test-service.dns-3827.svc.cluster.local from pod dns-3827/dns-test-c1a4d86e-b567-4ec7-b872-245f72e76eea: the server could not find the requested resource (get pods dns-test-c1a4d86e-b567-4ec7-b872-245f72e76eea)
May  8 11:41:37.930: INFO: Unable to read jessie_tcp@dns-test-service.dns-3827.svc.cluster.local from pod dns-3827/dns-test-c1a4d86e-b567-4ec7-b872-245f72e76eea: the server could not find the requested resource (get pods dns-test-c1a4d86e-b567-4ec7-b872-245f72e76eea)
May  8 11:41:37.934: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3827.svc.cluster.local from pod dns-3827/dns-test-c1a4d86e-b567-4ec7-b872-245f72e76eea: the server could not find the requested resource (get pods dns-test-c1a4d86e-b567-4ec7-b872-245f72e76eea)
May  8 11:41:37.937: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3827.svc.cluster.local from pod dns-3827/dns-test-c1a4d86e-b567-4ec7-b872-245f72e76eea: the server could not find the requested resource (get pods dns-test-c1a4d86e-b567-4ec7-b872-245f72e76eea)
May  8 11:41:37.957: INFO: Lookups using dns-3827/dns-test-c1a4d86e-b567-4ec7-b872-245f72e76eea failed for: [wheezy_udp@dns-test-service.dns-3827.svc.cluster.local wheezy_tcp@dns-test-service.dns-3827.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3827.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3827.svc.cluster.local jessie_udp@dns-test-service.dns-3827.svc.cluster.local jessie_tcp@dns-test-service.dns-3827.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3827.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3827.svc.cluster.local]

May  8 11:41:42.894: INFO: Unable to read wheezy_udp@dns-test-service.dns-3827.svc.cluster.local from pod dns-3827/dns-test-c1a4d86e-b567-4ec7-b872-245f72e76eea: the server could not find the requested resource (get pods dns-test-c1a4d86e-b567-4ec7-b872-245f72e76eea)
May  8 11:41:42.898: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3827.svc.cluster.local from pod dns-3827/dns-test-c1a4d86e-b567-4ec7-b872-245f72e76eea: the server could not find the requested resource (get pods dns-test-c1a4d86e-b567-4ec7-b872-245f72e76eea)
May  8 11:41:42.901: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3827.svc.cluster.local from pod dns-3827/dns-test-c1a4d86e-b567-4ec7-b872-245f72e76eea: the server could not find the requested resource (get pods dns-test-c1a4d86e-b567-4ec7-b872-245f72e76eea)
May  8 11:41:42.904: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3827.svc.cluster.local from pod dns-3827/dns-test-c1a4d86e-b567-4ec7-b872-245f72e76eea: the server could not find the requested resource (get pods dns-test-c1a4d86e-b567-4ec7-b872-245f72e76eea)
May  8 11:41:42.925: INFO: Unable to read jessie_udp@dns-test-service.dns-3827.svc.cluster.local from pod dns-3827/dns-test-c1a4d86e-b567-4ec7-b872-245f72e76eea: the server could not find the requested resource (get pods dns-test-c1a4d86e-b567-4ec7-b872-245f72e76eea)
May  8 11:41:42.928: INFO: Unable to read jessie_tcp@dns-test-service.dns-3827.svc.cluster.local from pod dns-3827/dns-test-c1a4d86e-b567-4ec7-b872-245f72e76eea: the server could not find the requested resource (get pods dns-test-c1a4d86e-b567-4ec7-b872-245f72e76eea)
May  8 11:41:42.931: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3827.svc.cluster.local from pod dns-3827/dns-test-c1a4d86e-b567-4ec7-b872-245f72e76eea: the server could not find the requested resource (get pods dns-test-c1a4d86e-b567-4ec7-b872-245f72e76eea)
May  8 11:41:42.934: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3827.svc.cluster.local from pod dns-3827/dns-test-c1a4d86e-b567-4ec7-b872-245f72e76eea: the server could not find the requested resource (get pods dns-test-c1a4d86e-b567-4ec7-b872-245f72e76eea)
May  8 11:41:42.954: INFO: Lookups using dns-3827/dns-test-c1a4d86e-b567-4ec7-b872-245f72e76eea failed for: [wheezy_udp@dns-test-service.dns-3827.svc.cluster.local wheezy_tcp@dns-test-service.dns-3827.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3827.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3827.svc.cluster.local jessie_udp@dns-test-service.dns-3827.svc.cluster.local jessie_tcp@dns-test-service.dns-3827.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3827.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3827.svc.cluster.local]

May  8 11:41:47.952: INFO: DNS probes using dns-3827/dns-test-c1a4d86e-b567-4ec7-b872-245f72e76eea succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:41:48.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3827" for this suite.

• [SLOW TEST:37.152 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for services  [Conformance]","total":275,"completed":198,"skipped":3449,"failed":0}
SSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:41:48.723: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-2c716c99-1bd6-4bed-bbfa-d7ec759c67f2
STEP: Creating a pod to test consume secrets
May  8 11:41:49.020: INFO: Waiting up to 5m0s for pod "pod-secrets-64521b46-20fa-45df-95fd-bb801cfef7f4" in namespace "secrets-7581" to be "Succeeded or Failed"
May  8 11:41:49.034: INFO: Pod "pod-secrets-64521b46-20fa-45df-95fd-bb801cfef7f4": Phase="Pending", Reason="", readiness=false. Elapsed: 14.085043ms
May  8 11:41:51.260: INFO: Pod "pod-secrets-64521b46-20fa-45df-95fd-bb801cfef7f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.239810409s
May  8 11:41:53.263: INFO: Pod "pod-secrets-64521b46-20fa-45df-95fd-bb801cfef7f4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.243466716s
May  8 11:41:55.267: INFO: Pod "pod-secrets-64521b46-20fa-45df-95fd-bb801cfef7f4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.247481888s
STEP: Saw pod success
May  8 11:41:55.267: INFO: Pod "pod-secrets-64521b46-20fa-45df-95fd-bb801cfef7f4" satisfied condition "Succeeded or Failed"
May  8 11:41:55.271: INFO: Trying to get logs from node kali-worker2 pod pod-secrets-64521b46-20fa-45df-95fd-bb801cfef7f4 container secret-volume-test: 
STEP: delete the pod
May  8 11:41:55.291: INFO: Waiting for pod pod-secrets-64521b46-20fa-45df-95fd-bb801cfef7f4 to disappear
May  8 11:41:55.296: INFO: Pod pod-secrets-64521b46-20fa-45df-95fd-bb801cfef7f4 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:41:55.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7581" for this suite.

• [SLOW TEST:6.600 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":199,"skipped":3452,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:41:55.324: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Performing setup for networking test in namespace pod-network-test-3892
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May  8 11:41:55.423: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May  8 11:41:55.523: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  8 11:41:57.527: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  8 11:41:59.527: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  8 11:42:01.527: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  8 11:42:03.528: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  8 11:42:05.527: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  8 11:42:07.527: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  8 11:42:09.527: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  8 11:42:11.527: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  8 11:42:13.527: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  8 11:42:15.527: INFO: The status of Pod netserver-0 is Running (Ready = true)
May  8 11:42:15.531: INFO: The status of Pod netserver-1 is Running (Ready = false)
May  8 11:42:17.534: INFO: The status of Pod netserver-1 is Running (Ready = false)
May  8 11:42:19.536: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
May  8 11:42:25.664: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.3 8081 | grep -v '^\s*$'] Namespace:pod-network-test-3892 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May  8 11:42:25.664: INFO: >>> kubeConfig: /root/.kube/config
I0508 11:42:25.695441       7 log.go:172] (0xc0009940b0) (0xc00094c320) Create stream
I0508 11:42:25.695469       7 log.go:172] (0xc0009940b0) (0xc00094c320) Stream added, broadcasting: 1
I0508 11:42:25.697023       7 log.go:172] (0xc0009940b0) Reply frame received for 1
I0508 11:42:25.697055       7 log.go:172] (0xc0009940b0) (0xc00094cbe0) Create stream
I0508 11:42:25.697064       7 log.go:172] (0xc0009940b0) (0xc00094cbe0) Stream added, broadcasting: 3
I0508 11:42:25.698340       7 log.go:172] (0xc0009940b0) Reply frame received for 3
I0508 11:42:25.698373       7 log.go:172] (0xc0009940b0) (0xc00094d900) Create stream
I0508 11:42:25.698386       7 log.go:172] (0xc0009940b0) (0xc00094d900) Stream added, broadcasting: 5
I0508 11:42:25.699319       7 log.go:172] (0xc0009940b0) Reply frame received for 5
I0508 11:42:26.749536       7 log.go:172] (0xc0009940b0) Data frame received for 5
I0508 11:42:26.749572       7 log.go:172] (0xc00094d900) (5) Data frame handling
I0508 11:42:26.749600       7 log.go:172] (0xc0009940b0) Data frame received for 3
I0508 11:42:26.749614       7 log.go:172] (0xc00094cbe0) (3) Data frame handling
I0508 11:42:26.749631       7 log.go:172] (0xc00094cbe0) (3) Data frame sent
I0508 11:42:26.749728       7 log.go:172] (0xc0009940b0) Data frame received for 3
I0508 11:42:26.749839       7 log.go:172] (0xc00094cbe0) (3) Data frame handling
I0508 11:42:26.752086       7 log.go:172] (0xc0009940b0) Data frame received for 1
I0508 11:42:26.752116       7 log.go:172] (0xc00094c320) (1) Data frame handling
I0508 11:42:26.752135       7 log.go:172] (0xc00094c320) (1) Data frame sent
I0508 11:42:26.752154       7 log.go:172] (0xc0009940b0) (0xc00094c320) Stream removed, broadcasting: 1
I0508 11:42:26.752172       7 log.go:172] (0xc0009940b0) Go away received
I0508 11:42:26.752309       7 log.go:172] (0xc0009940b0) (0xc00094c320) Stream removed, broadcasting: 1
I0508 11:42:26.752351       7 log.go:172] (0xc0009940b0) (0xc00094cbe0) Stream removed, broadcasting: 3
I0508 11:42:26.752373       7 log.go:172] (0xc0009940b0) (0xc00094d900) Stream removed, broadcasting: 5
May  8 11:42:26.752: INFO: Found all expected endpoints: [netserver-0]
May  8 11:42:26.756: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.52 8081 | grep -v '^\s*$'] Namespace:pod-network-test-3892 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May  8 11:42:26.756: INFO: >>> kubeConfig: /root/.kube/config
I0508 11:42:26.824570       7 log.go:172] (0xc004438370) (0xc0013786e0) Create stream
I0508 11:42:26.824606       7 log.go:172] (0xc004438370) (0xc0013786e0) Stream added, broadcasting: 1
I0508 11:42:26.827196       7 log.go:172] (0xc004438370) Reply frame received for 1
I0508 11:42:26.827242       7 log.go:172] (0xc004438370) (0xc00094da40) Create stream
I0508 11:42:26.827261       7 log.go:172] (0xc004438370) (0xc00094da40) Stream added, broadcasting: 3
I0508 11:42:26.828287       7 log.go:172] (0xc004438370) Reply frame received for 3
I0508 11:42:26.828323       7 log.go:172] (0xc004438370) (0xc00094dea0) Create stream
I0508 11:42:26.828336       7 log.go:172] (0xc004438370) (0xc00094dea0) Stream added, broadcasting: 5
I0508 11:42:26.829446       7 log.go:172] (0xc004438370) Reply frame received for 5
I0508 11:42:27.908521       7 log.go:172] (0xc004438370) Data frame received for 3
I0508 11:42:27.908549       7 log.go:172] (0xc00094da40) (3) Data frame handling
I0508 11:42:27.908561       7 log.go:172] (0xc00094da40) (3) Data frame sent
I0508 11:42:27.909573       7 log.go:172] (0xc004438370) Data frame received for 5
I0508 11:42:27.909601       7 log.go:172] (0xc00094dea0) (5) Data frame handling
I0508 11:42:27.909624       7 log.go:172] (0xc004438370) Data frame received for 3
I0508 11:42:27.909639       7 log.go:172] (0xc00094da40) (3) Data frame handling
I0508 11:42:27.910924       7 log.go:172] (0xc004438370) Data frame received for 1
I0508 11:42:27.910945       7 log.go:172] (0xc0013786e0) (1) Data frame handling
I0508 11:42:27.910956       7 log.go:172] (0xc0013786e0) (1) Data frame sent
I0508 11:42:27.910979       7 log.go:172] (0xc004438370) (0xc0013786e0) Stream removed, broadcasting: 1
I0508 11:42:27.911070       7 log.go:172] (0xc004438370) (0xc0013786e0) Stream removed, broadcasting: 1
I0508 11:42:27.911087       7 log.go:172] (0xc004438370) (0xc00094da40) Stream removed, broadcasting: 3
I0508 11:42:27.911099       7 log.go:172] (0xc004438370) (0xc00094dea0) Stream removed, broadcasting: 5
May  8 11:42:27.911: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:42:27.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0508 11:42:27.911386       7 log.go:172] (0xc004438370) Go away received
STEP: Destroying namespace "pod-network-test-3892" for this suite.

• [SLOW TEST:32.594 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":200,"skipped":3468,"failed":0}
SSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:42:27.918: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
May  8 11:42:28.029: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
May  8 11:42:28.055: INFO: Waiting for terminating namespaces to be deleted...
May  8 11:42:28.058: INFO: 
Logging pods the kubelet thinks are on node kali-worker before test
May  8 11:42:28.063: INFO: kindnet-f8plf from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
May  8 11:42:28.063: INFO: 	Container kindnet-cni ready: true, restart count 1
May  8 11:42:28.063: INFO: kube-proxy-vrswj from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
May  8 11:42:28.063: INFO: 	Container kube-proxy ready: true, restart count 0
May  8 11:42:28.063: INFO: netserver-0 from pod-network-test-3892 started at 2020-05-08 11:41:55 +0000 UTC (1 container statuses recorded)
May  8 11:42:28.063: INFO: 	Container webserver ready: true, restart count 0
May  8 11:42:28.063: INFO: test-container-pod from pod-network-test-3892 started at 2020-05-08 11:42:19 +0000 UTC (1 container statuses recorded)
May  8 11:42:28.063: INFO: 	Container webserver ready: true, restart count 0
May  8 11:42:28.063: INFO: 
Logging pods the kubelet thinks are on node kali-worker2 before test
May  8 11:42:28.068: INFO: kube-proxy-mmnb6 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
May  8 11:42:28.068: INFO: 	Container kube-proxy ready: true, restart count 0
May  8 11:42:28.068: INFO: netserver-1 from pod-network-test-3892 started at 2020-05-08 11:41:55 +0000 UTC (1 container statuses recorded)
May  8 11:42:28.068: INFO: 	Container webserver ready: true, restart count 0
May  8 11:42:28.068: INFO: host-test-container-pod from pod-network-test-3892 started at 2020-05-08 11:42:19 +0000 UTC (1 container statuses recorded)
May  8 11:42:28.068: INFO: 	Container agnhost ready: true, restart count 0
May  8 11:42:28.068: INFO: kindnet-mcdh2 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
May  8 11:42:28.068: INFO: 	Container kindnet-cni ready: true, restart count 0
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-8a22cddb-52ee-4b3a-8c02-2bb16a90166c 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-8a22cddb-52ee-4b3a-8c02-2bb16a90166c off the node kali-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-8a22cddb-52ee-4b3a-8c02-2bb16a90166c
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:42:38.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-8155" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82

• [SLOW TEST:10.808 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching  [Conformance]","total":275,"completed":201,"skipped":3471,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:42:38.727: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test hostPath mode
May  8 11:42:38.814: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-8205" to be "Succeeded or Failed"
May  8 11:42:38.833: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 18.86901ms
May  8 11:42:40.877: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063642773s
May  8 11:42:42.901: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.087389467s
May  8 11:42:44.914: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.100321171s
STEP: Saw pod success
May  8 11:42:44.914: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed"
May  8 11:42:44.926: INFO: Trying to get logs from node kali-worker pod pod-host-path-test container test-container-1: 
STEP: delete the pod
May  8 11:42:45.010: INFO: Waiting for pod pod-host-path-test to disappear
May  8 11:42:45.035: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:42:45.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-8205" for this suite.

• [SLOW TEST:6.323 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":202,"skipped":3501,"failed":0}
SSSSSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:42:45.051: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May  8 11:42:49.370: INFO: Waiting up to 5m0s for pod "client-envvars-c724b8a8-b7b4-44d2-9a09-7f070429977e" in namespace "pods-2373" to be "Succeeded or Failed"
May  8 11:42:49.392: INFO: Pod "client-envvars-c724b8a8-b7b4-44d2-9a09-7f070429977e": Phase="Pending", Reason="", readiness=false. Elapsed: 21.174444ms
May  8 11:42:51.603: INFO: Pod "client-envvars-c724b8a8-b7b4-44d2-9a09-7f070429977e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.232174332s
May  8 11:42:53.608: INFO: Pod "client-envvars-c724b8a8-b7b4-44d2-9a09-7f070429977e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.237138086s
STEP: Saw pod success
May  8 11:42:53.608: INFO: Pod "client-envvars-c724b8a8-b7b4-44d2-9a09-7f070429977e" satisfied condition "Succeeded or Failed"
May  8 11:42:53.610: INFO: Trying to get logs from node kali-worker pod client-envvars-c724b8a8-b7b4-44d2-9a09-7f070429977e container env3cont: 
STEP: delete the pod
May  8 11:42:53.648: INFO: Waiting for pod client-envvars-c724b8a8-b7b4-44d2-9a09-7f070429977e to disappear
May  8 11:42:53.657: INFO: Pod client-envvars-c724b8a8-b7b4-44d2-9a09-7f070429977e no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:42:53.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2373" for this suite.

• [SLOW TEST:8.614 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":275,"completed":203,"skipped":3510,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:42:53.665: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:43:08.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-9329" for this suite.
STEP: Destroying namespace "nsdeletetest-5743" for this suite.
May  8 11:43:08.986: INFO: Namespace nsdeletetest-5743 was already deleted
STEP: Destroying namespace "nsdeletetest-8276" for this suite.

• [SLOW TEST:15.325 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":275,"completed":204,"skipped":3522,"failed":0}
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:43:08.990: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-d2bf4b4a-7807-4113-921a-73fc098a93d3
STEP: Creating a pod to test consume secrets
May  8 11:43:09.057: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-8cea1adc-f18d-4cc1-88c9-b22bc71468b0" in namespace "projected-8406" to be "Succeeded or Failed"
May  8 11:43:09.078: INFO: Pod "pod-projected-secrets-8cea1adc-f18d-4cc1-88c9-b22bc71468b0": Phase="Pending", Reason="", readiness=false. Elapsed: 21.121122ms
May  8 11:43:11.082: INFO: Pod "pod-projected-secrets-8cea1adc-f18d-4cc1-88c9-b22bc71468b0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025170263s
May  8 11:43:13.093: INFO: Pod "pod-projected-secrets-8cea1adc-f18d-4cc1-88c9-b22bc71468b0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035972898s
STEP: Saw pod success
May  8 11:43:13.093: INFO: Pod "pod-projected-secrets-8cea1adc-f18d-4cc1-88c9-b22bc71468b0" satisfied condition "Succeeded or Failed"
May  8 11:43:13.096: INFO: Trying to get logs from node kali-worker2 pod pod-projected-secrets-8cea1adc-f18d-4cc1-88c9-b22bc71468b0 container projected-secret-volume-test: 
STEP: delete the pod
May  8 11:43:13.518: INFO: Waiting for pod pod-projected-secrets-8cea1adc-f18d-4cc1-88c9-b22bc71468b0 to disappear
May  8 11:43:13.556: INFO: Pod pod-projected-secrets-8cea1adc-f18d-4cc1-88c9-b22bc71468b0 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:43:13.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8406" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":205,"skipped":3522,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:43:13.633: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May  8 11:43:14.537: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May  8 11:43:16.550: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724534994, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724534994, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724534994, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724534994, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May  8 11:43:19.591: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that should be mutated
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that should not be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:43:20.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8295" for this suite.
STEP: Destroying namespace "webhook-8295-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:6.614 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":275,"completed":206,"skipped":3550,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:43:20.248: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
May  8 11:43:21.368: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
May  8 11:43:23.386: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724535001, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724535001, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724535001, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724535001, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-65c6cd5fdf\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May  8 11:43:26.421: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May  8 11:43:26.424: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: Create a v2 custom resource
STEP: List CRs in v1
STEP: List CRs in v2
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:43:27.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-9794" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137

• [SLOW TEST:7.535 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":275,"completed":207,"skipped":3593,"failed":0}
SSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:43:27.783: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name projected-secret-test-7ab3f4a7-6389-444c-b883-83b8f157c1d1
STEP: Creating a pod to test consume secrets
May  8 11:43:28.199: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-6e395c6b-d477-4b5a-b0c6-c4e58b09f38d" in namespace "projected-2559" to be "Succeeded or Failed"
May  8 11:43:28.364: INFO: Pod "pod-projected-secrets-6e395c6b-d477-4b5a-b0c6-c4e58b09f38d": Phase="Pending", Reason="", readiness=false. Elapsed: 165.787795ms
May  8 11:43:30.440: INFO: Pod "pod-projected-secrets-6e395c6b-d477-4b5a-b0c6-c4e58b09f38d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.241616119s
May  8 11:43:32.444: INFO: Pod "pod-projected-secrets-6e395c6b-d477-4b5a-b0c6-c4e58b09f38d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.245703987s
STEP: Saw pod success
May  8 11:43:32.444: INFO: Pod "pod-projected-secrets-6e395c6b-d477-4b5a-b0c6-c4e58b09f38d" satisfied condition "Succeeded or Failed"
May  8 11:43:32.447: INFO: Trying to get logs from node kali-worker pod pod-projected-secrets-6e395c6b-d477-4b5a-b0c6-c4e58b09f38d container secret-volume-test: 
STEP: delete the pod
May  8 11:43:32.603: INFO: Waiting for pod pod-projected-secrets-6e395c6b-d477-4b5a-b0c6-c4e58b09f38d to disappear
May  8 11:43:32.770: INFO: Pod pod-projected-secrets-6e395c6b-d477-4b5a-b0c6-c4e58b09f38d no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:43:32.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2559" for this suite.

• [SLOW TEST:5.059 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":275,"completed":208,"skipped":3596,"failed":0}
[sig-cli] Kubectl client Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:43:32.842: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: starting the proxy server
May  8 11:43:33.356: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:43:33.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8715" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]","total":275,"completed":209,"skipped":3596,"failed":0}
S
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:43:33.495: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name cm-test-opt-del-7f1403c8-f229-4a32-bdf7-654f7a1b9302
STEP: Creating configMap with name cm-test-opt-upd-e2042d52-b2e1-452e-b6f0-8ffc981ec0c8
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-7f1403c8-f229-4a32-bdf7-654f7a1b9302
STEP: Updating configmap cm-test-opt-upd-e2042d52-b2e1-452e-b6f0-8ffc981ec0c8
STEP: Creating configMap with name cm-test-opt-create-b4ee7f8f-1eeb-4d8f-b17e-f9906385c147
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:43:43.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9972" for this suite.

• [SLOW TEST:10.405 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":210,"skipped":3597,"failed":0}
S
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:43:43.900: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod test-webserver-41ae1386-3863-4052-b16d-158f68755a35 in namespace container-probe-512
May  8 11:43:48.075: INFO: Started pod test-webserver-41ae1386-3863-4052-b16d-158f68755a35 in namespace container-probe-512
STEP: checking the pod's current state and verifying that restartCount is present
May  8 11:43:48.078: INFO: Initial restart count of pod test-webserver-41ae1386-3863-4052-b16d-158f68755a35 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:47:49.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-512" for this suite.

• [SLOW TEST:245.165 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":275,"completed":211,"skipped":3598,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:47:49.066: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May  8 11:47:49.132: INFO: Waiting up to 5m0s for pod "downwardapi-volume-20f1c8f6-cd12-4afb-86a6-11d4a8a12f1e" in namespace "projected-9639" to be "Succeeded or Failed"
May  8 11:47:49.136: INFO: Pod "downwardapi-volume-20f1c8f6-cd12-4afb-86a6-11d4a8a12f1e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.788441ms
May  8 11:47:51.140: INFO: Pod "downwardapi-volume-20f1c8f6-cd12-4afb-86a6-11d4a8a12f1e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008204042s
May  8 11:47:53.145: INFO: Pod "downwardapi-volume-20f1c8f6-cd12-4afb-86a6-11d4a8a12f1e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012694026s
STEP: Saw pod success
May  8 11:47:53.145: INFO: Pod "downwardapi-volume-20f1c8f6-cd12-4afb-86a6-11d4a8a12f1e" satisfied condition "Succeeded or Failed"
May  8 11:47:53.148: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-20f1c8f6-cd12-4afb-86a6-11d4a8a12f1e container client-container: 
STEP: delete the pod
May  8 11:47:53.228: INFO: Waiting for pod downwardapi-volume-20f1c8f6-cd12-4afb-86a6-11d4a8a12f1e to disappear
May  8 11:47:53.238: INFO: Pod downwardapi-volume-20f1c8f6-cd12-4afb-86a6-11d4a8a12f1e no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:47:53.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9639" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":212,"skipped":3609,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:47:53.269: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May  8 11:47:53.355: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1bf4d1f2-0ee9-4b31-88b0-628061588fce" in namespace "downward-api-1301" to be "Succeeded or Failed"
May  8 11:47:53.375: INFO: Pod "downwardapi-volume-1bf4d1f2-0ee9-4b31-88b0-628061588fce": Phase="Pending", Reason="", readiness=false. Elapsed: 20.099322ms
May  8 11:47:55.413: INFO: Pod "downwardapi-volume-1bf4d1f2-0ee9-4b31-88b0-628061588fce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058071937s
May  8 11:47:57.418: INFO: Pod "downwardapi-volume-1bf4d1f2-0ee9-4b31-88b0-628061588fce": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.063456871s
STEP: Saw pod success
May  8 11:47:57.418: INFO: Pod "downwardapi-volume-1bf4d1f2-0ee9-4b31-88b0-628061588fce" satisfied condition "Succeeded or Failed"
May  8 11:47:57.421: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-1bf4d1f2-0ee9-4b31-88b0-628061588fce container client-container: 
STEP: delete the pod
May  8 11:47:57.512: INFO: Waiting for pod downwardapi-volume-1bf4d1f2-0ee9-4b31-88b0-628061588fce to disappear
May  8 11:47:57.628: INFO: Pod downwardapi-volume-1bf4d1f2-0ee9-4b31-88b0-628061588fce no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:47:57.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1301" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":213,"skipped":3644,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:47:57.639: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
May  8 11:47:57.695: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the sample API server.
May  8 11:47:58.408: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
May  8 11:48:00.868: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724535278, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724535278, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724535278, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724535278, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7996d54f97\" is progressing."}}, CollisionCount:(*int32)(nil)}
May  8 11:48:02.874: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724535278, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724535278, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724535278, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724535278, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7996d54f97\" is progressing."}}, CollisionCount:(*int32)(nil)}
May  8 11:48:05.401: INFO: Waited 523.56105ms for the sample-apiserver to be ready to handle requests.
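(Registering a sample API server with the aggregator, as this test does, amounts to creating an `apiregistration.k8s.io/v1` APIService object that points the kube-apiserver at the Service fronting the deployment; a hedged sketch — the group, service name, and caBundle below are illustrative placeholders, not values copied from this run:)

```yaml
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1alpha1.wardle.example.com    # convention: <version>.<group>
spec:
  group: wardle.example.com            # illustrative API group
  version: v1alpha1
  groupPriorityMinimum: 1000
  versionPriority: 15
  service:                             # Service in front of sample-apiserver-deployment
    name: sample-api
    namespace: aggregator-4125
  caBundle: <base64-encoded CA>        # placeholder; the test supplies a real bundle
```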
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:48:05.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-4125" for this suite.

• [SLOW TEST:8.526 seconds]
[sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":275,"completed":214,"skipped":3689,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:48:06.166: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-1612
[It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating stateful set ss in namespace statefulset-1612
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-1612
May  8 11:48:06.742: INFO: Found 0 stateful pods, waiting for 1
May  8 11:48:16.747: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
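("Burst scaling" refers to a StatefulSet with `podManagementPolicy: Parallel`, which creates and deletes pods all at once instead of one ordinal at a time; the spec under test is assumed to look roughly like this — a sketch, with labels and image version illustrative:)

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: test               # headless service created above
  podManagementPolicy: Parallel   # burst: no ordered, one-at-a-time rollout
  replicas: 1
  selector:
    matchLabels:
      app: ss
  template:
    metadata:
      labels:
        app: ss
    spec:
      containers:
      - name: webserver           # container name matches the log
        image: httpd:2.4-alpine   # serves /usr/local/apache2/htdocs
```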
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
May  8 11:48:16.766: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1612 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
May  8 11:48:17.055: INFO: stderr: "I0508 11:48:16.901652    3524 log.go:172] (0xc000a7e0b0) (0xc00086e280) Create stream\nI0508 11:48:16.901718    3524 log.go:172] (0xc000a7e0b0) (0xc00086e280) Stream added, broadcasting: 1\nI0508 11:48:16.903717    3524 log.go:172] (0xc000a7e0b0) Reply frame received for 1\nI0508 11:48:16.903759    3524 log.go:172] (0xc000a7e0b0) (0xc000819220) Create stream\nI0508 11:48:16.903772    3524 log.go:172] (0xc000a7e0b0) (0xc000819220) Stream added, broadcasting: 3\nI0508 11:48:16.904761    3524 log.go:172] (0xc000a7e0b0) Reply frame received for 3\nI0508 11:48:16.904792    3524 log.go:172] (0xc000a7e0b0) (0xc00086e320) Create stream\nI0508 11:48:16.904802    3524 log.go:172] (0xc000a7e0b0) (0xc00086e320) Stream added, broadcasting: 5\nI0508 11:48:16.905984    3524 log.go:172] (0xc000a7e0b0) Reply frame received for 5\nI0508 11:48:16.995544    3524 log.go:172] (0xc000a7e0b0) Data frame received for 5\nI0508 11:48:16.995574    3524 log.go:172] (0xc00086e320) (5) Data frame handling\nI0508 11:48:16.995609    3524 log.go:172] (0xc00086e320) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0508 11:48:17.048228    3524 log.go:172] (0xc000a7e0b0) Data frame received for 3\nI0508 11:48:17.048387    3524 log.go:172] (0xc000819220) (3) Data frame handling\nI0508 11:48:17.048505    3524 log.go:172] (0xc000819220) (3) Data frame sent\nI0508 11:48:17.048691    3524 log.go:172] (0xc000a7e0b0) Data frame received for 3\nI0508 11:48:17.048742    3524 log.go:172] (0xc000819220) (3) Data frame handling\nI0508 11:48:17.048782    3524 log.go:172] (0xc000a7e0b0) Data frame received for 5\nI0508 11:48:17.048805    3524 log.go:172] (0xc00086e320) (5) Data frame handling\nI0508 11:48:17.050766    3524 log.go:172] (0xc000a7e0b0) Data frame received for 1\nI0508 11:48:17.050796    3524 log.go:172] (0xc00086e280) (1) Data frame handling\nI0508 11:48:17.050810    3524 log.go:172] (0xc00086e280) (1) Data frame sent\nI0508 11:48:17.050848    3524 log.go:172] (0xc000a7e0b0) (0xc00086e280) Stream removed, broadcasting: 1\nI0508 11:48:17.050884    3524 log.go:172] (0xc000a7e0b0) Go away received\nI0508 11:48:17.051527    3524 log.go:172] (0xc000a7e0b0) (0xc00086e280) Stream removed, broadcasting: 1\nI0508 11:48:17.051553    3524 log.go:172] (0xc000a7e0b0) (0xc000819220) Stream removed, broadcasting: 3\nI0508 11:48:17.051573    3524 log.go:172] (0xc000a7e0b0) (0xc00086e320) Stream removed, broadcasting: 5\n"
May  8 11:48:17.056: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
May  8 11:48:17.056: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
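(The `mv` above makes the pod unhealthy without restarting it: the readiness probe fetches `index.html` from httpd's document root, so once the file is moved away the probe fails and the pod flips to Ready=false. A probe of roughly this shape is assumed — values are illustrative:)

```yaml
readinessProbe:
  httpGet:
    path: /index.html    # served from /usr/local/apache2/htdocs
    port: 80
  periodSeconds: 1
  successThreshold: 1
  failureThreshold: 1
```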

May  8 11:48:17.059: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
May  8 11:48:27.064: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
May  8 11:48:27.064: INFO: Waiting for statefulset status.replicas updated to 0
May  8 11:48:27.102: INFO: POD   NODE         PHASE    GRACE  CONDITIONS
May  8 11:48:27.102: INFO: ss-0  kali-worker  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:06 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:06 +0000 UTC  }]
May  8 11:48:27.102: INFO: 
May  8 11:48:27.102: INFO: StatefulSet ss has not reached scale 3, at 1
May  8 11:48:28.107: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.972540071s
May  8 11:48:29.642: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.967974881s
May  8 11:48:30.646: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.432757776s
May  8 11:48:31.655: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.42882803s
May  8 11:48:32.661: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.419820923s
May  8 11:48:33.667: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.414011512s
May  8 11:48:34.672: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.407759791s
May  8 11:48:35.678: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.40263394s
May  8 11:48:36.684: INFO: Verifying statefulset ss doesn't scale past 3 for another 396.246171ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-1612
May  8 11:48:37.689: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1612 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May  8 11:48:37.890: INFO: stderr: "I0508 11:48:37.813031    3545 log.go:172] (0xc000664420) (0xc000912320) Create stream\nI0508 11:48:37.813306    3545 log.go:172] (0xc000664420) (0xc000912320) Stream added, broadcasting: 1\nI0508 11:48:37.818862    3545 log.go:172] (0xc000664420) Reply frame received for 1\nI0508 11:48:37.818944    3545 log.go:172] (0xc000664420) (0xc0005b5680) Create stream\nI0508 11:48:37.818975    3545 log.go:172] (0xc000664420) (0xc0005b5680) Stream added, broadcasting: 3\nI0508 11:48:37.820041    3545 log.go:172] (0xc000664420) Reply frame received for 3\nI0508 11:48:37.820072    3545 log.go:172] (0xc000664420) (0xc0003deaa0) Create stream\nI0508 11:48:37.820082    3545 log.go:172] (0xc000664420) (0xc0003deaa0) Stream added, broadcasting: 5\nI0508 11:48:37.821287    3545 log.go:172] (0xc000664420) Reply frame received for 5\nI0508 11:48:37.882305    3545 log.go:172] (0xc000664420) Data frame received for 3\nI0508 11:48:37.882355    3545 log.go:172] (0xc0005b5680) (3) Data frame handling\nI0508 11:48:37.882380    3545 log.go:172] (0xc0005b5680) (3) Data frame sent\nI0508 11:48:37.882406    3545 log.go:172] (0xc000664420) Data frame received for 5\nI0508 11:48:37.882445    3545 log.go:172] (0xc0003deaa0) (5) Data frame handling\nI0508 11:48:37.882471    3545 log.go:172] (0xc0003deaa0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0508 11:48:37.882501    3545 log.go:172] (0xc000664420) Data frame received for 5\nI0508 11:48:37.882521    3545 log.go:172] (0xc0003deaa0) (5) Data frame handling\nI0508 11:48:37.882553    3545 log.go:172] (0xc000664420) Data frame received for 3\nI0508 11:48:37.882575    3545 log.go:172] (0xc0005b5680) (3) Data frame handling\nI0508 11:48:37.884268    3545 log.go:172] (0xc000664420) Data frame received for 1\nI0508 11:48:37.884298    3545 log.go:172] (0xc000912320) (1) Data frame handling\nI0508 11:48:37.884315    3545 log.go:172] (0xc000912320) (1) Data frame sent\nI0508 11:48:37.884330    3545 log.go:172] (0xc000664420) (0xc000912320) Stream removed, broadcasting: 1\nI0508 11:48:37.884350    3545 log.go:172] (0xc000664420) Go away received\nI0508 11:48:37.884824    3545 log.go:172] (0xc000664420) (0xc000912320) Stream removed, broadcasting: 1\nI0508 11:48:37.884847    3545 log.go:172] (0xc000664420) (0xc0005b5680) Stream removed, broadcasting: 3\nI0508 11:48:37.884859    3545 log.go:172] (0xc000664420) (0xc0003deaa0) Stream removed, broadcasting: 5\n"
May  8 11:48:37.890: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
May  8 11:48:37.890: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

May  8 11:48:37.890: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1612 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May  8 11:48:38.175: INFO: stderr: "I0508 11:48:38.101313    3565 log.go:172] (0xc00096c580) (0xc000916460) Create stream\nI0508 11:48:38.101440    3565 log.go:172] (0xc00096c580) (0xc000916460) Stream added, broadcasting: 1\nI0508 11:48:38.104905    3565 log.go:172] (0xc00096c580) Reply frame received for 1\nI0508 11:48:38.104962    3565 log.go:172] (0xc00096c580) (0xc0006277c0) Create stream\nI0508 11:48:38.104975    3565 log.go:172] (0xc00096c580) (0xc0006277c0) Stream added, broadcasting: 3\nI0508 11:48:38.106222    3565 log.go:172] (0xc00096c580) Reply frame received for 3\nI0508 11:48:38.106269    3565 log.go:172] (0xc00096c580) (0xc000436be0) Create stream\nI0508 11:48:38.106285    3565 log.go:172] (0xc00096c580) (0xc000436be0) Stream added, broadcasting: 5\nI0508 11:48:38.106989    3565 log.go:172] (0xc00096c580) Reply frame received for 5\nI0508 11:48:38.167013    3565 log.go:172] (0xc00096c580) Data frame received for 5\nI0508 11:48:38.167063    3565 log.go:172] (0xc000436be0) (5) Data frame handling\nI0508 11:48:38.167076    3565 log.go:172] (0xc000436be0) (5) Data frame sent\nI0508 11:48:38.167082    3565 log.go:172] (0xc00096c580) Data frame received for 5\nI0508 11:48:38.167088    3565 log.go:172] (0xc000436be0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0508 11:48:38.167111    3565 log.go:172] (0xc00096c580) Data frame received for 3\nI0508 11:48:38.167117    3565 log.go:172] (0xc0006277c0) (3) Data frame handling\nI0508 11:48:38.167123    3565 log.go:172] (0xc0006277c0) (3) Data frame sent\nI0508 11:48:38.167132    3565 log.go:172] (0xc00096c580) Data frame received for 3\nI0508 11:48:38.167141    3565 log.go:172] (0xc0006277c0) (3) Data frame handling\nI0508 11:48:38.168933    3565 log.go:172] (0xc00096c580) Data frame received for 1\nI0508 11:48:38.168953    3565 log.go:172] (0xc000916460) (1) Data frame handling\nI0508 11:48:38.168964    3565 log.go:172] (0xc000916460) (1) Data frame sent\nI0508 11:48:38.168979    3565 log.go:172] (0xc00096c580) (0xc000916460) Stream removed, broadcasting: 1\nI0508 11:48:38.168995    3565 log.go:172] (0xc00096c580) Go away received\nI0508 11:48:38.169679    3565 log.go:172] (0xc00096c580) (0xc000916460) Stream removed, broadcasting: 1\nI0508 11:48:38.169713    3565 log.go:172] (0xc00096c580) (0xc0006277c0) Stream removed, broadcasting: 3\nI0508 11:48:38.169733    3565 log.go:172] (0xc00096c580) (0xc000436be0) Stream removed, broadcasting: 5\n"
May  8 11:48:38.175: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
May  8 11:48:38.175: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

May  8 11:48:38.175: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1612 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May  8 11:48:38.367: INFO: stderr: "I0508 11:48:38.306001    3584 log.go:172] (0xc0009a08f0) (0xc000146280) Create stream\nI0508 11:48:38.306049    3584 log.go:172] (0xc0009a08f0) (0xc000146280) Stream added, broadcasting: 1\nI0508 11:48:38.308408    3584 log.go:172] (0xc0009a08f0) Reply frame received for 1\nI0508 11:48:38.308442    3584 log.go:172] (0xc0009a08f0) (0xc0009b6000) Create stream\nI0508 11:48:38.308451    3584 log.go:172] (0xc0009a08f0) (0xc0009b6000) Stream added, broadcasting: 3\nI0508 11:48:38.309391    3584 log.go:172] (0xc0009a08f0) Reply frame received for 3\nI0508 11:48:38.309422    3584 log.go:172] (0xc0009a08f0) (0xc0006bd220) Create stream\nI0508 11:48:38.309431    3584 log.go:172] (0xc0009a08f0) (0xc0006bd220) Stream added, broadcasting: 5\nI0508 11:48:38.310235    3584 log.go:172] (0xc0009a08f0) Reply frame received for 5\nI0508 11:48:38.360188    3584 log.go:172] (0xc0009a08f0) Data frame received for 5\nI0508 11:48:38.360252    3584 log.go:172] (0xc0006bd220) (5) Data frame handling\nI0508 11:48:38.360279    3584 log.go:172] (0xc0006bd220) (5) Data frame sent\nI0508 11:48:38.360308    3584 log.go:172] (0xc0009a08f0) Data frame received for 5\nI0508 11:48:38.360326    3584 log.go:172] (0xc0006bd220) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0508 11:48:38.360361    3584 log.go:172] (0xc0009a08f0) Data frame received for 3\nI0508 11:48:38.360397    3584 log.go:172] (0xc0009b6000) (3) Data frame handling\nI0508 11:48:38.360422    3584 log.go:172] (0xc0009b6000) (3) Data frame sent\nI0508 11:48:38.360498    3584 log.go:172] (0xc0009a08f0) Data frame received for 3\nI0508 11:48:38.360535    3584 log.go:172] (0xc0009b6000) (3) Data frame handling\nI0508 11:48:38.362404    3584 log.go:172] (0xc0009a08f0) Data frame received for 1\nI0508 11:48:38.362441    3584 log.go:172] (0xc000146280) (1) Data frame handling\nI0508 11:48:38.362463    3584 log.go:172] (0xc000146280) (1) Data frame sent\nI0508 11:48:38.362489    3584 log.go:172] (0xc0009a08f0) (0xc000146280) Stream removed, broadcasting: 1\nI0508 11:48:38.362558    3584 log.go:172] (0xc0009a08f0) Go away received\nI0508 11:48:38.362964    3584 log.go:172] (0xc0009a08f0) (0xc000146280) Stream removed, broadcasting: 1\nI0508 11:48:38.362986    3584 log.go:172] (0xc0009a08f0) (0xc0009b6000) Stream removed, broadcasting: 3\nI0508 11:48:38.362998    3584 log.go:172] (0xc0009a08f0) (0xc0006bd220) Stream removed, broadcasting: 5\n"
May  8 11:48:38.368: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
May  8 11:48:38.368: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

May  8 11:48:38.371: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
May  8 11:48:38.371: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
May  8 11:48:38.371: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
May  8 11:48:38.374: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1612 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
May  8 11:48:38.584: INFO: stderr: "I0508 11:48:38.514257    3604 log.go:172] (0xc00003a420) (0xc0004ec960) Create stream\nI0508 11:48:38.514469    3604 log.go:172] (0xc00003a420) (0xc0004ec960) Stream added, broadcasting: 1\nI0508 11:48:38.518020    3604 log.go:172] (0xc00003a420) Reply frame received for 1\nI0508 11:48:38.518087    3604 log.go:172] (0xc00003a420) (0xc0009b2000) Create stream\nI0508 11:48:38.518105    3604 log.go:172] (0xc00003a420) (0xc0009b2000) Stream added, broadcasting: 3\nI0508 11:48:38.519982    3604 log.go:172] (0xc00003a420) Reply frame received for 3\nI0508 11:48:38.520022    3604 log.go:172] (0xc00003a420) (0xc0009b20a0) Create stream\nI0508 11:48:38.520036    3604 log.go:172] (0xc00003a420) (0xc0009b20a0) Stream added, broadcasting: 5\nI0508 11:48:38.521102    3604 log.go:172] (0xc00003a420) Reply frame received for 5\nI0508 11:48:38.579140    3604 log.go:172] (0xc00003a420) Data frame received for 3\nI0508 11:48:38.579196    3604 log.go:172] (0xc00003a420) Data frame received for 5\nI0508 11:48:38.579237    3604 log.go:172] (0xc0009b20a0) (5) Data frame handling\nI0508 11:48:38.579258    3604 log.go:172] (0xc0009b20a0) (5) Data frame sent\nI0508 11:48:38.579271    3604 log.go:172] (0xc00003a420) Data frame received for 5\nI0508 11:48:38.579280    3604 log.go:172] (0xc0009b20a0) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0508 11:48:38.579312    3604 log.go:172] (0xc0009b2000) (3) Data frame handling\nI0508 11:48:38.579336    3604 log.go:172] (0xc0009b2000) (3) Data frame sent\nI0508 11:48:38.579349    3604 log.go:172] (0xc00003a420) Data frame received for 3\nI0508 11:48:38.579360    3604 log.go:172] (0xc0009b2000) (3) Data frame handling\nI0508 11:48:38.580372    3604 log.go:172] (0xc00003a420) Data frame received for 1\nI0508 11:48:38.580387    3604 log.go:172] (0xc0004ec960) (1) Data frame handling\nI0508 11:48:38.580394    3604 log.go:172] (0xc0004ec960) (1) Data frame sent\nI0508 11:48:38.580402    3604 log.go:172] (0xc00003a420) (0xc0004ec960) Stream removed, broadcasting: 1\nI0508 11:48:38.580612    3604 log.go:172] (0xc00003a420) Go away received\nI0508 11:48:38.580698    3604 log.go:172] (0xc00003a420) (0xc0004ec960) Stream removed, broadcasting: 1\nI0508 11:48:38.580793    3604 log.go:172] (0xc00003a420) (0xc0009b2000) Stream removed, broadcasting: 3\nI0508 11:48:38.580815    3604 log.go:172] (0xc00003a420) (0xc0009b20a0) Stream removed, broadcasting: 5\n"
May  8 11:48:38.585: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
May  8 11:48:38.585: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

May  8 11:48:38.585: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1612 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
May  8 11:48:38.834: INFO: stderr: "I0508 11:48:38.716171    3624 log.go:172] (0xc000514b00) (0xc0006cb400) Create stream\nI0508 11:48:38.716239    3624 log.go:172] (0xc000514b00) (0xc0006cb400) Stream added, broadcasting: 1\nI0508 11:48:38.726753    3624 log.go:172] (0xc000514b00) Reply frame received for 1\nI0508 11:48:38.726810    3624 log.go:172] (0xc000514b00) (0xc000a36000) Create stream\nI0508 11:48:38.726825    3624 log.go:172] (0xc000514b00) (0xc000a36000) Stream added, broadcasting: 3\nI0508 11:48:38.728704    3624 log.go:172] (0xc000514b00) Reply frame received for 3\nI0508 11:48:38.728733    3624 log.go:172] (0xc000514b00) (0xc000a360a0) Create stream\nI0508 11:48:38.728742    3624 log.go:172] (0xc000514b00) (0xc000a360a0) Stream added, broadcasting: 5\nI0508 11:48:38.729823    3624 log.go:172] (0xc000514b00) Reply frame received for 5\nI0508 11:48:38.793780    3624 log.go:172] (0xc000514b00) Data frame received for 5\nI0508 11:48:38.793807    3624 log.go:172] (0xc000a360a0) (5) Data frame handling\nI0508 11:48:38.793826    3624 log.go:172] (0xc000a360a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0508 11:48:38.824770    3624 log.go:172] (0xc000514b00) Data frame received for 5\nI0508 11:48:38.824808    3624 log.go:172] (0xc000a360a0) (5) Data frame handling\nI0508 11:48:38.824844    3624 log.go:172] (0xc000514b00) Data frame received for 3\nI0508 11:48:38.824876    3624 log.go:172] (0xc000a36000) (3) Data frame handling\nI0508 11:48:38.824904    3624 log.go:172] (0xc000a36000) (3) Data frame sent\nI0508 11:48:38.824917    3624 log.go:172] (0xc000514b00) Data frame received for 3\nI0508 11:48:38.824942    3624 log.go:172] (0xc000a36000) (3) Data frame handling\nI0508 11:48:38.827152    3624 log.go:172] (0xc000514b00) Data frame received for 1\nI0508 11:48:38.827184    3624 log.go:172] (0xc0006cb400) (1) Data frame handling\nI0508 11:48:38.827220    3624 log.go:172] (0xc0006cb400) (1) Data frame sent\nI0508 11:48:38.827249    3624 log.go:172] (0xc000514b00) (0xc0006cb400) Stream removed, broadcasting: 1\nI0508 11:48:38.827277    3624 log.go:172] (0xc000514b00) Go away received\nI0508 11:48:38.827839    3624 log.go:172] (0xc000514b00) (0xc0006cb400) Stream removed, broadcasting: 1\nI0508 11:48:38.827871    3624 log.go:172] (0xc000514b00) (0xc000a36000) Stream removed, broadcasting: 3\nI0508 11:48:38.827891    3624 log.go:172] (0xc000514b00) (0xc000a360a0) Stream removed, broadcasting: 5\n"
May  8 11:48:38.834: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
May  8 11:48:38.834: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

May  8 11:48:38.834: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1612 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
May  8 11:48:39.084: INFO: stderr: "I0508 11:48:38.975009    3645 log.go:172] (0xc00096c000) (0xc000b94000) Create stream\nI0508 11:48:38.975085    3645 log.go:172] (0xc00096c000) (0xc000b94000) Stream added, broadcasting: 1\nI0508 11:48:38.978044    3645 log.go:172] (0xc00096c000) Reply frame received for 1\nI0508 11:48:38.978084    3645 log.go:172] (0xc00096c000) (0xc0003e0a00) Create stream\nI0508 11:48:38.978093    3645 log.go:172] (0xc00096c000) (0xc0003e0a00) Stream added, broadcasting: 3\nI0508 11:48:38.978953    3645 log.go:172] (0xc00096c000) Reply frame received for 3\nI0508 11:48:38.978993    3645 log.go:172] (0xc00096c000) (0xc00098a000) Create stream\nI0508 11:48:38.979007    3645 log.go:172] (0xc00096c000) (0xc00098a000) Stream added, broadcasting: 5\nI0508 11:48:38.980035    3645 log.go:172] (0xc00096c000) Reply frame received for 5\nI0508 11:48:39.036896    3645 log.go:172] (0xc00096c000) Data frame received for 5\nI0508 11:48:39.036954    3645 log.go:172] (0xc00098a000) (5) Data frame handling\nI0508 11:48:39.036986    3645 log.go:172] (0xc00098a000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0508 11:48:39.076818    3645 log.go:172] (0xc00096c000) Data frame received for 3\nI0508 11:48:39.076849    3645 log.go:172] (0xc0003e0a00) (3) Data frame handling\nI0508 11:48:39.076865    3645 log.go:172] (0xc0003e0a00) (3) Data frame sent\nI0508 11:48:39.076874    3645 log.go:172] (0xc00096c000) Data frame received for 3\nI0508 11:48:39.076880    3645 log.go:172] (0xc0003e0a00) (3) Data frame handling\nI0508 11:48:39.077376    3645 log.go:172] (0xc00096c000) Data frame received for 5\nI0508 11:48:39.077404    3645 log.go:172] (0xc00098a000) (5) Data frame handling\nI0508 11:48:39.079176    3645 log.go:172] (0xc00096c000) Data frame received for 1\nI0508 11:48:39.079189    3645 log.go:172] (0xc000b94000) (1) Data frame handling\nI0508 11:48:39.079195    3645 log.go:172] (0xc000b94000) (1) Data frame sent\nI0508 11:48:39.079203    3645 log.go:172] (0xc00096c000) (0xc000b94000) Stream removed, broadcasting: 1\nI0508 11:48:39.079273    3645 log.go:172] (0xc00096c000) Go away received\nI0508 11:48:39.079439    3645 log.go:172] (0xc00096c000) (0xc000b94000) Stream removed, broadcasting: 1\nI0508 11:48:39.079458    3645 log.go:172] (0xc00096c000) (0xc0003e0a00) Stream removed, broadcasting: 3\nI0508 11:48:39.079467    3645 log.go:172] (0xc00096c000) (0xc00098a000) Stream removed, broadcasting: 5\n"
May  8 11:48:39.084: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
May  8 11:48:39.084: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

May  8 11:48:39.084: INFO: Waiting for statefulset status.replicas updated to 0
May  8 11:48:39.107: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
May  8 11:48:49.115: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
May  8 11:48:49.115: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
May  8 11:48:49.115: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
May  8 11:48:49.137: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
May  8 11:48:49.137: INFO: ss-0  kali-worker   Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:06 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:06 +0000 UTC  }]
May  8 11:48:49.137: INFO: ss-1  kali-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:27 +0000 UTC  }]
May  8 11:48:49.137: INFO: ss-2  kali-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:27 +0000 UTC  }]
May  8 11:48:49.137: INFO: 
May  8 11:48:49.137: INFO: StatefulSet ss has not reached scale 0, at 3
May  8 11:48:50.216: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
May  8 11:48:50.216: INFO: ss-0  kali-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:06 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:06 +0000 UTC  }]
May  8 11:48:50.216: INFO: ss-1  kali-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:27 +0000 UTC  }]
May  8 11:48:50.216: INFO: ss-2  kali-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:27 +0000 UTC  }]
May  8 11:48:50.216: INFO: 
May  8 11:48:50.216: INFO: StatefulSet ss has not reached scale 0, at 3
May  8 11:48:51.221: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
May  8 11:48:51.221: INFO: ss-0  kali-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:06 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:06 +0000 UTC  }]
May  8 11:48:51.221: INFO: ss-1  kali-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:27 +0000 UTC  }]
May  8 11:48:51.221: INFO: ss-2  kali-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:27 +0000 UTC  }]
May  8 11:48:51.221: INFO: 
May  8 11:48:51.221: INFO: StatefulSet ss has not reached scale 0, at 3
May  8 11:48:52.227: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
May  8 11:48:52.227: INFO: ss-0  kali-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:06 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:06 +0000 UTC  }]
May  8 11:48:52.227: INFO: ss-1  kali-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:27 +0000 UTC  }]
May  8 11:48:52.227: INFO: ss-2  kali-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:27 +0000 UTC  }]
May  8 11:48:52.227: INFO: 
May  8 11:48:52.227: INFO: StatefulSet ss has not reached scale 0, at 3
May  8 11:48:53.231: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
May  8 11:48:53.231: INFO: ss-0  kali-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:06 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:06 +0000 UTC  }]
May  8 11:48:53.231: INFO: ss-1  kali-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:27 +0000 UTC  }]
May  8 11:48:53.231: INFO: ss-2  kali-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:27 +0000 UTC  }]
May  8 11:48:53.231: INFO: 
May  8 11:48:53.231: INFO: StatefulSet ss has not reached scale 0, at 3
May  8 11:48:54.236: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
May  8 11:48:54.236: INFO: ss-0  kali-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:06 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:06 +0000 UTC  }]
May  8 11:48:54.237: INFO: ss-1  kali-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:27 +0000 UTC  }]
May  8 11:48:54.237: INFO: ss-2  kali-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:27 +0000 UTC  }]
May  8 11:48:54.237: INFO: 
May  8 11:48:54.237: INFO: StatefulSet ss has not reached scale 0, at 3
May  8 11:48:55.241: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
May  8 11:48:55.241: INFO: ss-0  kali-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:06 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:06 +0000 UTC  }]
May  8 11:48:55.241: INFO: ss-1  kali-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:27 +0000 UTC  }]
May  8 11:48:55.241: INFO: ss-2  kali-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:27 +0000 UTC  }]
May  8 11:48:55.241: INFO: 
May  8 11:48:55.241: INFO: StatefulSet ss has not reached scale 0, at 3
May  8 11:48:56.245: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
May  8 11:48:56.245: INFO: ss-0  kali-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:06 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:06 +0000 UTC  }]
May  8 11:48:56.245: INFO: ss-1  kali-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:27 +0000 UTC  }]
May  8 11:48:56.245: INFO: ss-2  kali-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:27 +0000 UTC  }]
May  8 11:48:56.245: INFO: 
May  8 11:48:56.245: INFO: StatefulSet ss has not reached scale 0, at 3
May  8 11:48:57.250: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
May  8 11:48:57.250: INFO: ss-0  kali-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:06 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:06 +0000 UTC  }]
May  8 11:48:57.250: INFO: ss-1  kali-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:27 +0000 UTC  }]
May  8 11:48:57.250: INFO: ss-2  kali-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:27 +0000 UTC  }]
May  8 11:48:57.250: INFO: 
May  8 11:48:57.250: INFO: StatefulSet ss has not reached scale 0, at 3
May  8 11:48:58.255: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
May  8 11:48:58.255: INFO: ss-0  kali-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:06 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:06 +0000 UTC  }]
May  8 11:48:58.255: INFO: ss-1  kali-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:27 +0000 UTC  }]
May  8 11:48:58.255: INFO: ss-2  kali-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-08 11:48:27 +0000 UTC  }]
May  8 11:48:58.255: INFO: 
May  8 11:48:58.255: INFO: StatefulSet ss has not reached scale 0, at 3
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-1612
May  8 11:48:59.261: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1612 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May  8 11:48:59.385: INFO: rc: 1
May  8 11:48:59.385: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1612 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("webserver")

error:
exit status 1
May  8 11:49:09.385: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1612 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May  8 11:49:09.476: INFO: rc: 1
May  8 11:49:09.476: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1612 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May  8 11:49:19.476: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1612 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May  8 11:49:19.568: INFO: rc: 1
May  8 11:49:19.569: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1612 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May  8 11:49:29.569: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1612 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May  8 11:49:29.669: INFO: rc: 1
May  8 11:49:29.669: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1612 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May  8 11:49:39.670: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1612 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May  8 11:49:39.779: INFO: rc: 1
May  8 11:49:39.779: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1612 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May  8 11:49:49.779: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1612 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May  8 11:49:49.883: INFO: rc: 1
May  8 11:49:49.883: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1612 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May  8 11:49:59.883: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1612 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May  8 11:49:59.990: INFO: rc: 1
May  8 11:49:59.990: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1612 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May  8 11:50:09.990: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1612 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May  8 11:50:10.089: INFO: rc: 1
May  8 11:50:10.089: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1612 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May  8 11:50:20.090: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1612 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May  8 11:50:20.316: INFO: rc: 1
May  8 11:50:20.316: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1612 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May  8 11:50:30.316: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1612 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May  8 11:50:33.694: INFO: rc: 1
May  8 11:50:33.694: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1612 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May  8 11:50:43.694: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1612 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May  8 11:50:43.799: INFO: rc: 1
May  8 11:50:43.799: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1612 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May  8 11:50:53.800: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1612 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May  8 11:50:53.903: INFO: rc: 1
May  8 11:50:53.903: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1612 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May  8 11:51:03.903: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1612 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May  8 11:51:04.002: INFO: rc: 1
May  8 11:51:04.002: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1612 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May  8 11:51:14.002: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1612 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May  8 11:51:14.108: INFO: rc: 1
May  8 11:51:14.108: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1612 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May  8 11:51:24.108: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1612 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May  8 11:51:24.204: INFO: rc: 1
May  8 11:51:24.204: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1612 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May  8 11:51:34.204: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1612 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May  8 11:51:34.333: INFO: rc: 1
May  8 11:51:34.333: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1612 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May  8 11:51:44.333: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1612 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May  8 11:51:44.430: INFO: rc: 1
May  8 11:51:44.430: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1612 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May  8 11:51:54.430: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1612 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May  8 11:51:54.529: INFO: rc: 1
May  8 11:51:54.529: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1612 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May  8 11:52:04.530: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1612 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May  8 11:52:04.659: INFO: rc: 1
May  8 11:52:04.659: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1612 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May  8 11:52:14.660: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1612 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May  8 11:52:14.788: INFO: rc: 1
May  8 11:52:14.788: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1612 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May  8 11:52:24.788: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1612 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May  8 11:52:24.884: INFO: rc: 1
May  8 11:52:24.884: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1612 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May  8 11:52:34.884: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1612 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May  8 11:52:34.982: INFO: rc: 1
May  8 11:52:34.982: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1612 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May  8 11:52:44.983: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1612 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May  8 11:52:45.082: INFO: rc: 1
May  8 11:52:45.082: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1612 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May  8 11:52:55.082: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1612 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May  8 11:52:55.178: INFO: rc: 1
May  8 11:52:55.178: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1612 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May  8 11:53:05.178: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1612 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May  8 11:53:05.277: INFO: rc: 1
May  8 11:53:05.277: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1612 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
May  8 11:53:15.278 through 11:53:55.755: INFO: (five further retries of the same RunHostCmd, 10s apart, each returning rc: 1 with empty stdout and stderr: Error from server (NotFound): pods "ss-0" not found; identical output elided)
May  8 11:54:05.755: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1612 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May  8 11:54:05.846: INFO: rc: 1
May  8 11:54:05.846: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: 
May  8 11:54:05.846: INFO: Scaling statefulset ss to 0
May  8 11:54:05.856: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
May  8 11:54:05.859: INFO: Deleting all statefulset in ns statefulset-1612
May  8 11:54:05.861: INFO: Scaling statefulset ss to 0
May  8 11:54:05.870: INFO: Waiting for statefulset status.replicas updated to 0
May  8 11:54:05.872: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:54:05.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-1612" for this suite.

• [SLOW TEST:359.729 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":275,"completed":215,"skipped":3730,"failed":0}
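The retry behaviour visible in the log above (re-run the `kubectl exec` at a fixed 10s interval until it succeeds or the attempts run out) can be sketched as follows. This is a minimal illustration, not the e2e framework's actual `RunHostCmd` helper; `fake_kubectl_exec` is a hypothetical stand-in for the real command.

```python
import time

def run_host_cmd_with_retries(cmd, attempts=5, interval=10.0):
    """Retry `cmd` at a fixed interval until it succeeds or attempts run out,
    mirroring the fixed 10s retry loop in the log above. `cmd` is any callable
    returning (rc, stdout, stderr); the first rc == 0 result wins, otherwise
    the last failure is returned."""
    result = cmd()
    for _ in range(attempts - 1):
        if result[0] == 0:
            break
        time.sleep(interval)
        result = cmd()
    return result

# Hypothetical stand-in for the kubectl exec call: fails twice with the
# NotFound error seen above, then succeeds.
calls = {"n": 0}
def fake_kubectl_exec():
    calls["n"] += 1
    if calls["n"] < 3:
        return (1, "", 'Error from server (NotFound): pods "ss-0" not found')
    return (0, "moved /tmp/index.html", "")

# interval=0 so the sketch runs instantly; the harness waits 10s per retry.
rc, stdout, stderr = run_host_cmd_with_retries(fake_kubectl_exec, interval=0)
```

Note that the harness above eventually gives up and proceeds to scale the StatefulSet down; the `|| true` suffix on the remote command keeps a missing file from failing the exec itself.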
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:54:05.895: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation
May  8 11:54:05.930: INFO: >>> kubeConfig: /root/.kube/config
STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation
May  8 11:54:18.054: INFO: >>> kubeConfig: /root/.kube/config
May  8 11:54:21.009: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:54:31.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-7143" for this suite.

• [SLOW TEST:25.850 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":275,"completed":216,"skipped":3748,"failed":0}
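The invariant the CustomResourcePublishOpenAPI test above checks can be sketched like this: each served version of a group gets its own path in the published OpenAPI document, whether the versions live in one multi-version CRD or in two single-version CRDs of the same group. The group and resource names here are hypothetical, not the ones the test generates.

```python
def published_paths(group, plural, versions):
    """Namespaced custom-resource paths as they appear in the published
    OpenAPI spec, one per served version of the group."""
    return [f"/apis/{group}/{v}/namespaces/{{namespace}}/{plural}"
            for v in versions]

# One multi-version CRD serving v1 and v2...
one_multiversion_crd = published_paths("example.com", "foos", ["v1", "v2"])
# ...publishes the same paths as two single-version CRDs of the same group.
two_single_version_crds = (published_paths("example.com", "foos", ["v1"])
                           + published_paths("example.com", "foos", ["v2"]))
```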
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  listing custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:54:31.746: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] listing custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May  8 11:54:31.803: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:54:38.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-6907" for this suite.

• [SLOW TEST:6.323 seconds]
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48
    listing custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":275,"completed":217,"skipped":3782,"failed":0}
SSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:54:38.069: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
May  8 11:54:38.246: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-4481 /api/v1/namespaces/watch-4481/configmaps/e2e-watch-test-watch-closed 83ab3dc4-7342-43d0-a9ff-9d0c35927ccc 2582421 0 2020-05-08 11:54:38 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  [{e2e.test Update v1 2020-05-08 11:54:38 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
May  8 11:54:38.246: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-4481 /api/v1/namespaces/watch-4481/configmaps/e2e-watch-test-watch-closed 83ab3dc4-7342-43d0-a9ff-9d0c35927ccc 2582422 0 2020-05-08 11:54:38 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  [{e2e.test Update v1 2020-05-08 11:54:38 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
May  8 11:54:38.262: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-4481 /api/v1/namespaces/watch-4481/configmaps/e2e-watch-test-watch-closed 83ab3dc4-7342-43d0-a9ff-9d0c35927ccc 2582423 0 2020-05-08 11:54:38 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  [{e2e.test Update v1 2020-05-08 11:54:38 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
May  8 11:54:38.262: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-4481 /api/v1/namespaces/watch-4481/configmaps/e2e-watch-test-watch-closed 83ab3dc4-7342-43d0-a9ff-9d0c35927ccc 2582424 0 2020-05-08 11:54:38 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  [{e2e.test Update v1 2020-05-08 11:54:38 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:54:38.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-4481" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":275,"completed":218,"skipped":3786,"failed":0}
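The watch-restart behaviour verified above can be modelled with a toy event log (this is a sketch of the semantics, not the client-go API): a closed watch can be restarted from the last resourceVersion it observed, and the new watch replays only changes after that point. The resourceVersions match the ones in the log.

```python
# ConfigMap events as (resourceVersion, event type), as seen in the log above.
events = [
    (2582421, "ADDED"),
    (2582422, "MODIFIED"),   # the first watch is closed after this event
    (2582423, "MODIFIED"),   # happens while the watch is closed
    (2582424, "DELETED"),
]

def watch_from(log, resource_version):
    """Yield events with resourceVersion strictly greater than the given one,
    which is how a restarted watch avoids replaying already-seen changes."""
    for rv, kind in log:
        if rv > resource_version:
            yield rv, kind

# First watch observes two notifications, then is closed.
first = list(watch_from(events[:2], 0))
last_rv = first[-1][0]
# The restarted watch resumes from last_rv and sees only the later changes.
resumed = list(watch_from(events, last_rv))
```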
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:54:38.270: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
May  8 11:54:42.841: INFO: Successfully updated pod "pod-update-7710c436-ff6f-486b-8431-33a4c38cbaa4"
STEP: verifying the updated pod is in kubernetes
May  8 11:54:42.847: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:54:42.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7921" for this suite.
•{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":275,"completed":219,"skipped":3832,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:54:42.855: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May  8 11:54:42.991: INFO: Creating deployment "test-recreate-deployment"
May  8 11:54:43.002: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
May  8 11:54:43.019: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
May  8 11:54:45.026: INFO: Waiting deployment "test-recreate-deployment" to complete
May  8 11:54:45.029: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724535683, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724535683, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724535683, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724535683, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-74d98b5f7c\" is progressing."}}, CollisionCount:(*int32)(nil)}
May  8 11:54:47.033: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
May  8 11:54:47.041: INFO: Updating deployment test-recreate-deployment
May  8 11:54:47.041: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
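The property being watched here can be modelled with a toy rollout (a sketch of the Recreate strategy's ordering, not the controller implementation): every old pod is deleted before any new pod is created, so pods from the old and new ReplicaSets never run concurrently.

```python
def recreate_rollout(old_pods, new_pods):
    """Order of operations under strategy type Recreate: scale the old
    ReplicaSet to zero first, then bring up the new ReplicaSet's pods."""
    timeline = []
    for p in old_pods:
        timeline.append(("delete", p))   # terminate all old pods first
    for p in new_pods:
        timeline.append(("create", p))   # only then create new pods
    return timeline

timeline = recreate_rollout(
    ["test-recreate-deployment-74d98b5f7c-x"],   # hypothetical old pod name
    ["test-recreate-deployment-d5667d9c7-vqx7w"],
)
```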
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
May  8 11:54:47.748: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:{test-recreate-deployment  deployment-3752 /apis/apps/v1/namespaces/deployment-3752/deployments/test-recreate-deployment f0f5d430-99c4-4a35-9e24-0643954e8636 2582519 2 2020-05-08 11:54:42 +0000 UTC   map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] []  [{e2e.test Update apps/v1 2020-05-08 11:54:47 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 112 114 111 103 114 101 115 115 68 101 97 100 108 105 110 101 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 118 105 115 105 111 110 72 105 115 116 111 114 121 76 105 109 105 116 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 116 114 97 116 101 103 121 34 58 123 34 102 58 116 121 112 101 34 58 123 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 
101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-05-08 11:54:47 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 65 118 97 105 108 97 98 108 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 80 114 111 103 114 101 115 115 105 110 103 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 
34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 110 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 112 100 97 116 101 100 82 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005bac1d8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-05-08 11:54:47 +0000 UTC,LastTransitionTime:2020-05-08 11:54:47 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet 
"test-recreate-deployment-d5667d9c7" is progressing.,LastUpdateTime:2020-05-08 11:54:47 +0000 UTC,LastTransitionTime:2020-05-08 11:54:43 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},}

May  8 11:54:47.753: INFO: New ReplicaSet "test-recreate-deployment-d5667d9c7" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:{test-recreate-deployment-d5667d9c7  deployment-3752 /apis/apps/v1/namespaces/deployment-3752/replicasets/test-recreate-deployment-d5667d9c7 cc6c9fd1-d0e6-4105-b5d5-55504eb11c36 2582514 1 2020-05-08 11:54:47 +0000 UTC   map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment f0f5d430-99c4-4a35-9e24-0643954e8636 0xc005b8fc40 0xc005b8fc41}] []  [{kube-controller-manager Update apps/v1 2020-05-08 11:54:47 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 102 48 102 53 100 52 51 48 45 57 57 99 52 45 52 97 51 53 45 57 101 50 52 45 48 54 52 51 57 53 52 101 56 54 51 54 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 
123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 
58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: d5667d9c7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005b8fcd8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
May  8 11:54:47.753: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
May  8 11:54:47.754: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-74d98b5f7c  deployment-3752 /apis/apps/v1/namespaces/deployment-3752/replicasets/test-recreate-deployment-74d98b5f7c 8bb780ce-c715-41b6-8b40-b82f6ddf2b38 2582505 2 2020-05-08 11:54:43 +0000 UTC   map[name:sample-pod-3 pod-template-hash:74d98b5f7c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment f0f5d430-99c4-4a35-9e24-0643954e8636 0xc005b8fb07 0xc005b8fb08}] []  [{kube-controller-manager Update apps/v1 2020-05-08 11:54:47 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 102 48 102 53 100 52 51 48 45 57 57 99 52 45 52 97 51 53 45 57 101 50 52 45 48 54 52 51 57 53 52 101 56 54 51 54 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 
114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 
111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 74d98b5f7c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:74d98b5f7c] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005b8fba8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
May  8 11:54:47.757: INFO: Pod "test-recreate-deployment-d5667d9c7-vqx7w" is not available:
&Pod{ObjectMeta:{test-recreate-deployment-d5667d9c7-vqx7w test-recreate-deployment-d5667d9c7- deployment-3752 /api/v1/namespaces/deployment-3752/pods/test-recreate-deployment-d5667d9c7-vqx7w e5befa88-e46d-48a4-978b-fa86f5b29033 2582517 0 2020-05-08 11:54:47 +0000 UTC   map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[] [{apps/v1 ReplicaSet test-recreate-deployment-d5667d9c7 cc6c9fd1-d0e6-4105-b5d5-55504eb11c36 0xc005bce2a0 0xc005bce2a1}] []  [{kube-controller-manager Update v1 2020-05-08 11:54:47 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 99 99 54 99 57 102 100 49 45 100 48 101 54 45 52 49 48 53 45 98 53 100 53 45 53 53 53 48 52 101 98 49 49 99 51 54 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 
114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-08 11:54:47 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 
100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rlxqn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rlxqn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rlxqn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:fa
lse,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 11:54:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 11:54:47 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 11:54:47 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 11:54:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-05-08 11:54:47 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:54:47.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-3752" for this suite.
•{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":275,"completed":220,"skipped":3849,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:54:47.764: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May  8 11:54:48.055: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0f16c81e-3be1-4283-a0a1-967c7390c2f1" in namespace "downward-api-2733" to be "Succeeded or Failed"
May  8 11:54:48.136: INFO: Pod "downwardapi-volume-0f16c81e-3be1-4283-a0a1-967c7390c2f1": Phase="Pending", Reason="", readiness=false. Elapsed: 80.236955ms
May  8 11:54:50.148: INFO: Pod "downwardapi-volume-0f16c81e-3be1-4283-a0a1-967c7390c2f1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.092367994s
May  8 11:54:52.152: INFO: Pod "downwardapi-volume-0f16c81e-3be1-4283-a0a1-967c7390c2f1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.096340411s
May  8 11:54:54.163: INFO: Pod "downwardapi-volume-0f16c81e-3be1-4283-a0a1-967c7390c2f1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.107923613s
STEP: Saw pod success
May  8 11:54:54.163: INFO: Pod "downwardapi-volume-0f16c81e-3be1-4283-a0a1-967c7390c2f1" satisfied condition "Succeeded or Failed"
May  8 11:54:54.166: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-0f16c81e-3be1-4283-a0a1-967c7390c2f1 container client-container: 
STEP: delete the pod
May  8 11:54:54.248: INFO: Waiting for pod downwardapi-volume-0f16c81e-3be1-4283-a0a1-967c7390c2f1 to disappear
May  8 11:54:54.259: INFO: Pod downwardapi-volume-0f16c81e-3be1-4283-a0a1-967c7390c2f1 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:54:54.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2733" for this suite.

• [SLOW TEST:6.506 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":275,"completed":221,"skipped":3860,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:54:54.270: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May  8 11:54:54.826: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May  8 11:54:56.838: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724535694, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724535694, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724535694, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724535694, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May  8 11:54:58.842: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724535694, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724535694, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724535694, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724535694, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May  8 11:55:01.886: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod that should be denied by the webhook
STEP: create a pod that causes the webhook to hang
STEP: create a configmap that should be denied by the webhook
STEP: create a configmap that should be admitted by the webhook
STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: create a namespace that bypasses the webhook
STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:55:12.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2377" for this suite.
STEP: Destroying namespace "webhook-2377-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:17.950 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":275,"completed":222,"skipped":3891,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:55:12.220: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicaSet
STEP: Ensuring resource quota status captures replicaset creation
STEP: Deleting a ReplicaSet
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:55:23.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-5425" for this suite.

• [SLOW TEST:11.155 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":275,"completed":223,"skipped":3918,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:55:23.375: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:55:28.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-9111" for this suite.

• [SLOW TEST:5.132 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":275,"completed":224,"skipped":3925,"failed":0}
SSSSSSSSSS
------------------------------
[sig-network] DNS 
  should support configurable pod DNS nameservers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:55:28.508: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support configurable pod DNS nameservers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod with dnsPolicy=None and customized dnsConfig...
May  8 11:55:28.577: INFO: Created pod &Pod{ObjectMeta:{dns-5385  dns-5385 /api/v1/namespaces/dns-5385/pods/dns-5385 7b5d3ada-6104-414a-a96d-f7a90ba66808 2582901 0 2020-05-08 11:55:28 +0000 UTC   map[] map[] [] []  [{e2e.test Update v1 2020-05-08 11:55:28 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 114 103 115 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 67 111 110 102 105 103 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 115 101 114 118 101 114 115 34 58 123 125 44 34 102 58 115 101 97 114 99 104 101 115 34 58 123 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nl4wr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nl4wr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nl4wr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kuber
netes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  8 11:55:28.580: INFO: The status of Pod dns-5385 is Pending, waiting for it to be Running (with Ready = true)
May  8 11:55:30.585: INFO: The status of Pod dns-5385 is Pending, waiting for it to be Running (with Ready = true)
May  8 11:55:32.584: INFO: The status of Pod dns-5385 is Running (Ready = true)
STEP: Verifying customized DNS suffix list is configured on pod...
May  8 11:55:32.584: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-5385 PodName:dns-5385 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May  8 11:55:32.584: INFO: >>> kubeConfig: /root/.kube/config
I0508 11:55:32.628007       7 log.go:172] (0xc0028fa420) (0xc0014ea000) Create stream
I0508 11:55:32.628045       7 log.go:172] (0xc0028fa420) (0xc0014ea000) Stream added, broadcasting: 1
I0508 11:55:32.630143       7 log.go:172] (0xc0028fa420) Reply frame received for 1
I0508 11:55:32.630204       7 log.go:172] (0xc0028fa420) (0xc0014ea140) Create stream
I0508 11:55:32.630239       7 log.go:172] (0xc0028fa420) (0xc0014ea140) Stream added, broadcasting: 3
I0508 11:55:32.631015       7 log.go:172] (0xc0028fa420) Reply frame received for 3
I0508 11:55:32.631055       7 log.go:172] (0xc0028fa420) (0xc0014ea280) Create stream
I0508 11:55:32.631064       7 log.go:172] (0xc0028fa420) (0xc0014ea280) Stream added, broadcasting: 5
I0508 11:55:32.631707       7 log.go:172] (0xc0028fa420) Reply frame received for 5
I0508 11:55:32.739700       7 log.go:172] (0xc0028fa420) Data frame received for 3
I0508 11:55:32.739732       7 log.go:172] (0xc0014ea140) (3) Data frame handling
I0508 11:55:32.739762       7 log.go:172] (0xc0014ea140) (3) Data frame sent
I0508 11:55:32.740787       7 log.go:172] (0xc0028fa420) Data frame received for 3
I0508 11:55:32.740838       7 log.go:172] (0xc0014ea140) (3) Data frame handling
I0508 11:55:32.740880       7 log.go:172] (0xc0028fa420) Data frame received for 5
I0508 11:55:32.740916       7 log.go:172] (0xc0014ea280) (5) Data frame handling
I0508 11:55:32.742898       7 log.go:172] (0xc0028fa420) Data frame received for 1
I0508 11:55:32.742922       7 log.go:172] (0xc0014ea000) (1) Data frame handling
I0508 11:55:32.742934       7 log.go:172] (0xc0014ea000) (1) Data frame sent
I0508 11:55:32.742944       7 log.go:172] (0xc0028fa420) (0xc0014ea000) Stream removed, broadcasting: 1
I0508 11:55:32.742955       7 log.go:172] (0xc0028fa420) Go away received
I0508 11:55:32.743057       7 log.go:172] (0xc0028fa420) (0xc0014ea000) Stream removed, broadcasting: 1
I0508 11:55:32.743081       7 log.go:172] (0xc0028fa420) (0xc0014ea140) Stream removed, broadcasting: 3
I0508 11:55:32.743092       7 log.go:172] (0xc0028fa420) (0xc0014ea280) Stream removed, broadcasting: 5
STEP: Verifying customized DNS server is configured on pod...
May  8 11:55:32.743: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-5385 PodName:dns-5385 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May  8 11:55:32.743: INFO: >>> kubeConfig: /root/.kube/config
I0508 11:55:32.777319       7 log.go:172] (0xc0028fa6e0) (0xc0014ea3c0) Create stream
I0508 11:55:32.777345       7 log.go:172] (0xc0028fa6e0) (0xc0014ea3c0) Stream added, broadcasting: 1
I0508 11:55:32.779180       7 log.go:172] (0xc0028fa6e0) Reply frame received for 1
I0508 11:55:32.779236       7 log.go:172] (0xc0028fa6e0) (0xc0014ea460) Create stream
I0508 11:55:32.779255       7 log.go:172] (0xc0028fa6e0) (0xc0014ea460) Stream added, broadcasting: 3
I0508 11:55:32.780066       7 log.go:172] (0xc0028fa6e0) Reply frame received for 3
I0508 11:55:32.780096       7 log.go:172] (0xc0028fa6e0) (0xc0014ea5a0) Create stream
I0508 11:55:32.780104       7 log.go:172] (0xc0028fa6e0) (0xc0014ea5a0) Stream added, broadcasting: 5
I0508 11:55:32.781003       7 log.go:172] (0xc0028fa6e0) Reply frame received for 5
I0508 11:55:32.856982       7 log.go:172] (0xc0028fa6e0) Data frame received for 3
I0508 11:55:32.857011       7 log.go:172] (0xc0014ea460) (3) Data frame handling
I0508 11:55:32.857028       7 log.go:172] (0xc0014ea460) (3) Data frame sent
I0508 11:55:32.858082       7 log.go:172] (0xc0028fa6e0) Data frame received for 3
I0508 11:55:32.858116       7 log.go:172] (0xc0014ea460) (3) Data frame handling
I0508 11:55:32.858160       7 log.go:172] (0xc0028fa6e0) Data frame received for 5
I0508 11:55:32.858201       7 log.go:172] (0xc0014ea5a0) (5) Data frame handling
I0508 11:55:32.859397       7 log.go:172] (0xc0028fa6e0) Data frame received for 1
I0508 11:55:32.859428       7 log.go:172] (0xc0014ea3c0) (1) Data frame handling
I0508 11:55:32.859449       7 log.go:172] (0xc0014ea3c0) (1) Data frame sent
I0508 11:55:32.859478       7 log.go:172] (0xc0028fa6e0) (0xc0014ea3c0) Stream removed, broadcasting: 1
I0508 11:55:32.859501       7 log.go:172] (0xc0028fa6e0) Go away received
I0508 11:55:32.859659       7 log.go:172] (0xc0028fa6e0) (0xc0014ea3c0) Stream removed, broadcasting: 1
I0508 11:55:32.859688       7 log.go:172] (0xc0028fa6e0) (0xc0014ea460) Stream removed, broadcasting: 3
I0508 11:55:32.859706       7 log.go:172] (0xc0028fa6e0) (0xc0014ea5a0) Stream removed, broadcasting: 5
May  8 11:55:32.859: INFO: Deleting pod dns-5385...
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:55:32.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5385" for this suite.
•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":275,"completed":225,"skipped":3935,"failed":0}
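The test above verifies that a pod's resolver honors custom DNS settings (the `/agnhost dns-server-list` exec checks which servers the pod is actually configured with). A minimal sketch of the kind of Pod spec this feature covers — the pod name, nameserver address, and search domain below are illustrative, not taken from the log; the agnhost image is the one the suite uses:

```yaml
# Sketch only: a Pod opting out of cluster DNS and supplying its own resolvers,
# the behavior exercised by "should support configurable pod DNS nameservers".
apiVersion: v1
kind: Pod
metadata:
  name: dns-example              # hypothetical name
spec:
  containers:
  - name: agnhost
    image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
    command: ["/agnhost", "pause"]
  dnsPolicy: "None"              # ignore cluster DNS entirely; use dnsConfig below
  dnsConfig:
    nameservers:
    - 1.2.3.4                    # illustrative custom DNS server
    searches:
    - example.com                # illustrative search domain
```

With `dnsPolicy: None`, the kubelet writes exactly the `dnsConfig` entries into the pod's `/etc/resolv.conf`, which is what the exec'd `dns-server-list` command reads back.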
S
------------------------------
[sig-cli] Kubectl client Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:55:32.917: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: validating cluster-info
May  8 11:55:32.999: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config cluster-info'
May  8 11:55:33.160: INFO: stderr: ""
May  8 11:55:33.161: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32772\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32772/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:55:33.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3352" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info  [Conformance]","total":275,"completed":226,"skipped":3936,"failed":0}
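Note that the captured `cluster-info` stdout above embeds ANSI color escapes (`\x1b[0;32m` and so on), since kubectl colorizes this output. A small, hedged sketch for stripping those escapes when you need to grep or diff such captured output as plain text (the sample string is abridged from the log line above):

```shell
# Sketch: remove ANSI SGR color sequences like ESC[0;32m from captured output.
sample=$(printf '\x1b[0;32mKubernetes master\x1b[0m is running')
esc=$(printf '\033')                                  # literal ESC character
plain=$(printf '%s' "$sample" | sed "s/${esc}\[[0-9;]*m//g")
printf '%s\n' "$plain"                                # prints: Kubernetes master is running
```

The same filter applies to any kubectl command whose colorized stdout ends up in a log capture.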
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:55:33.170: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0508 11:56:13.711664       7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May  8 11:56:13.711: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:56:13.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3844" for this suite.

• [SLOW TEST:40.549 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":275,"completed":227,"skipped":3990,"failed":0}
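The "delete the rc" step above issues the delete with orphaning semantics, then waits 30 seconds to confirm the garbage collector leaves the RC's pods alone. A hedged sketch of the DeleteOptions body that corresponds to this (shown as YAML for readability; the API accepts it as the request body of the DELETE call):

```yaml
# Sketch: orphaning delete options — the ReplicationController is removed,
# but its dependent pods keep running with their ownerReferences cleared.
apiVersion: v1
kind: DeleteOptions
propagationPolicy: Orphan
```

From the command line, kubectl of this vintage (v1.18) expresses the same intent with `kubectl delete rc <name> --cascade=false`; newer releases spell it `--cascade=orphan`.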
SSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:56:13.720: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May  8 11:56:14.572: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May  8 11:56:16.813: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724535774, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724535774, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724535774, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724535774, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May  8 11:56:19.884: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a mutating webhook configuration
STEP: Updating a mutating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that should not be mutated
STEP: Patching a mutating webhook configuration's rules to include the create operation
STEP: Creating a configMap that should be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:56:21.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5982" for this suite.
STEP: Destroying namespace "webhook-5982-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:8.602 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":275,"completed":228,"skipped":4001,"failed":0}
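The update/patch steps above toggle whether the webhook's rules include the CREATE operation, then confirm a new ConfigMap is or is not mutated accordingly. A hedged sketch of the kind of configuration being edited — webhook name and path are illustrative; the service name and namespace match the log lines above:

```yaml
# Sketch: a mutating webhook scoped to ConfigMap creation. Removing "CREATE"
# from operations (the test's update step) stops mutation of new ConfigMaps;
# patching it back in (the test's patch step) re-enables it.
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: e2e-test-mutating-webhook          # hypothetical name
webhooks:
- name: mutate-configmaps.example.com      # hypothetical name
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]                 # the operation the test removes and re-adds
    resources: ["configmaps"]
  clientConfig:
    service:
      name: e2e-test-webhook               # service name from the log
      namespace: webhook-5982              # namespace from the log
      path: /mutating-configmaps           # hypothetical path
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Fail
```

Because rule matching happens in the API server before admission, the edit takes effect for the very next ConfigMap create, which is why the test can assert on mutation immediately after each patch.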
SSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:56:22.322: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May  8 11:56:22.969: INFO: Pod name rollover-pod: Found 0 pods out of 1
May  8 11:56:27.972: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
May  8 11:56:27.972: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
May  8 11:56:29.976: INFO: Creating deployment "test-rollover-deployment"
May  8 11:56:29.990: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
May  8 11:56:31.997: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
May  8 11:56:32.003: INFO: Ensure that both replica sets have 1 created replica
May  8 11:56:32.007: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
May  8 11:56:32.013: INFO: Updating deployment test-rollover-deployment
May  8 11:56:32.014: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
May  8 11:56:34.038: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
May  8 11:56:34.044: INFO: Make sure deployment "test-rollover-deployment" is complete
May  8 11:56:34.050: INFO: all replica sets need to contain the pod-template-hash label
May  8 11:56:34.050: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724535790, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724535790, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724535792, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724535790, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
May  8 11:56:36.060: INFO: all replica sets need to contain the pod-template-hash label
May  8 11:56:36.060: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724535790, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724535790, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724535792, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724535790, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
May  8 11:56:38.063: INFO: all replica sets need to contain the pod-template-hash label
May  8 11:56:38.063: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724535790, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724535790, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724535796, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724535790, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
May  8 11:56:40.059: INFO: all replica sets need to contain the pod-template-hash label
May  8 11:56:40.059: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724535790, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724535790, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724535796, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724535790, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
May  8 11:56:42.059: INFO: all replica sets need to contain the pod-template-hash label
May  8 11:56:42.059: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724535790, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724535790, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724535796, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724535790, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
May  8 11:56:44.058: INFO: all replica sets need to contain the pod-template-hash label
May  8 11:56:44.058: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724535790, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724535790, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724535796, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724535790, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
May  8 11:56:46.056: INFO: all replica sets need to contain the pod-template-hash label
May  8 11:56:46.056: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724535790, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724535790, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724535796, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724535790, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
May  8 11:56:48.064: INFO: 
May  8 11:56:48.064: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
May  8 11:56:48.106: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:{test-rollover-deployment  deployment-5026 /apis/apps/v1/namespaces/deployment-5026/deployments/test-rollover-deployment eb3c5823-8361-427a-96a5-596595264ec9 2583496 2 2020-05-08 11:56:29 +0000 UTC   map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] []  [{e2e.test Update apps/v1 2020-05-08 11:56:32 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 109 105 110 82 101 97 100 121 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 112 114 111 103 114 101 115 115 68 101 97 100 108 105 110 101 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 118 105 115 105 111 110 72 105 115 116 111 114 121 76 105 109 105 116 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 116 114 97 116 101 103 121 34 58 123 34 102 58 114 111 108 108 105 110 103 85 112 100 97 116 101 34 58 123 34 46 34 58 123 125 44 34 102 58 109 97 120 83 117 114 103 101 34 58 123 125 44 34 102 58 109 97 120 85 110 97 118 97 105 108 97 98 108 101 34 58 123 125 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 
125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-05-08 11:56:46 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 65 118 97 105 108 97 98 108 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 
58 123 92 34 116 121 112 101 92 34 58 92 34 80 114 111 103 114 101 115 115 105 110 103 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 112 100 97 116 101 100 82 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00461e218  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] 
[]}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-05-08 11:56:30 +0000 UTC,LastTransitionTime:2020-05-08 11:56:30 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-84f7f6f64b" has successfully progressed.,LastUpdateTime:2020-05-08 11:56:46 +0000 UTC,LastTransitionTime:2020-05-08 11:56:30 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

May  8 11:56:48.110: INFO: New ReplicaSet "test-rollover-deployment-84f7f6f64b" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:{test-rollover-deployment-84f7f6f64b  deployment-5026 /apis/apps/v1/namespaces/deployment-5026/replicasets/test-rollover-deployment-84f7f6f64b baf94eb6-2361-42e6-916b-34f4497c8439 2583484 2 2020-05-08 11:56:32 +0000 UTC   map[name:rollover-pod pod-template-hash:84f7f6f64b] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment eb3c5823-8361-427a-96a5-596595264ec9 0xc00461e857 0xc00461e858}] []  [{kube-controller-manager Update apps/v1 2020-05-08 11:56:46 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 101 98 51 99 53 56 50 51 45 56 51 54 49 45 52 50 55 97 45 57 54 97 53 45 53 57 54 53 57 53 50 54 52 101 99 57 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 
123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 109 105 110 82 101 97 100 121 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 
110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 84f7f6f64b,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:84f7f6f64b] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00461e8e8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
May  8 11:56:48.110: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
May  8 11:56:48.110: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller  deployment-5026 /apis/apps/v1/namespaces/deployment-5026/replicasets/test-rollover-controller 649ae57d-d11a-47f8-a6f7-333338d91293 2583495 2 2020-05-08 11:56:22 +0000 UTC   map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment eb3c5823-8361-427a-96a5-596595264ec9 0xc00461e647 0xc00461e648}] []  [{e2e.test Update apps/v1 2020-05-08 11:56:22 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 
121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-05-08 11:56:46 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 101 98 51 99 53 56 50 51 45 56 51 54 49 45 52 50 55 97 45 57 54 97 53 45 53 57 54 53 57 53 50 54 52 101 99 57 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 
125],}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00461e6e8  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
May  8 11:56:48.111: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-5686c4cfd5  deployment-5026 /apis/apps/v1/namespaces/deployment-5026/replicasets/test-rollover-deployment-5686c4cfd5 53faf387-099e-4361-8c2c-46c85fbb9134 2583431 2 2020-05-08 11:56:29 +0000 UTC   map[name:rollover-pod pod-template-hash:5686c4cfd5] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment eb3c5823-8361-427a-96a5-596595264ec9 0xc00461e757 0xc00461e758}] []  [{kube-controller-manager Update apps/v1 2020-05-08 11:56:32 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 101 98 51 99 53 56 50 51 45 56 51 54 49 45 52 50 55 97 45 57 54 97 53 45 53 57 54 53 57 53 50 54 52 101 99 57 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 
111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 109 105 110 82 101 97 100 121 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 114 101 100 105 115 45 115 108 97 118 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 
123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5686c4cfd5,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:5686c4cfd5] map[] [] []  []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00461e7e8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
May  8 11:56:48.114: INFO: Pod "test-rollover-deployment-84f7f6f64b-ltlkm" is available:
&Pod{ObjectMeta:{test-rollover-deployment-84f7f6f64b-ltlkm test-rollover-deployment-84f7f6f64b- deployment-5026 /api/v1/namespaces/deployment-5026/pods/test-rollover-deployment-84f7f6f64b-ltlkm 14dd7e2a-8ce1-43d0-9a38-dd44023640af 2583451 0 2020-05-08 11:56:32 +0000 UTC   map[name:rollover-pod pod-template-hash:84f7f6f64b] map[] [{apps/v1 ReplicaSet test-rollover-deployment-84f7f6f64b baf94eb6-2361-42e6-916b-34f4497c8439 0xc0030a8c87 0xc0030a8c88}] []  [{kube-controller-manager Update v1 2020-05-08 11:56:32 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 98 97 102 57 52 101 98 54 45 50 51 54 49 45 52 50 101 54 45 57 49 54 98 45 51 52 102 52 52 57 55 99 56 52 51 57 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 
101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-08 11:56:36 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 
84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 50 46 50 52 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6xjtm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6xjtm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6xjtm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,Run
AsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 11:56:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 11:56:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 11:56:36 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 11:56:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:10.244.2.24,StartTime:2020-05-08 11:56:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-08 11:56:35 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://ea8f2013e5a48d2c49a0aab8ebe9475c78da6e6cbca85e02d6cbf61cbec589d8,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.24,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:56:48.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-5026" for this suite.

• [SLOW TEST:25.800 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":275,"completed":229,"skipped":4006,"failed":0}
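Editor's note: the ReplicaSet and Pod dumps in this test print each managed-fields `FieldsV1` `Raw` payload as a Go byte slice, i.e. a space-separated run of decimal ASCII codes. A small helper (an illustrative sketch; `decode_fieldsv1_raw` is not part of the e2e framework) recovers the JSON text those runs encode:

```python
import json

def decode_fieldsv1_raw(byte_codes: str) -> str:
    """Convert a run of decimal ASCII codes ('123 34 102 ...') back to text."""
    return "".join(chr(int(code)) for code in byte_codes.split())

# A short prefix in the same shape as the dumps above decodes to
# managed-fields JSON (this sample is a minimal stand-in, not copied
# verbatim from the log):
sample = "123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 125 125"
decoded = decode_fieldsv1_raw(sample)
# decoded is '{"f:metadata":{}}', which parses as ordinary JSON.
parsed = json.loads(decoded)
```

Applied to the full `Raw:*[...]` runs above, this yields the server-side-apply field ownership maps (`"f:spec"`, `"f:template"`, and so on) in readable form.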
S
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:56:48.123: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
May  8 11:56:48.229: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-1035 /api/v1/namespaces/watch-1035/configmaps/e2e-watch-test-resource-version 4c50c284-8d1d-4304-b77b-02989d50f6c0 2583510 0 2020-05-08 11:56:48 +0000 UTC   map[watch-this-configmap:from-resource-version] map[] [] []  [{e2e.test Update v1 2020-05-08 11:56:48 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
May  8 11:56:48.230: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-1035 /api/v1/namespaces/watch-1035/configmaps/e2e-watch-test-resource-version 4c50c284-8d1d-4304-b77b-02989d50f6c0 2583511 0 2020-05-08 11:56:48 +0000 UTC   map[watch-this-configmap:from-resource-version] map[] [] []  [{e2e.test Update v1 2020-05-08 11:56:48 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:56:48.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-1035" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":275,"completed":230,"skipped":4007,"failed":0}
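Editor's note: the Watchers test above creates a configmap, modifies it twice, deletes it, then opens a watch at the resource version returned by the first update and expects to observe only the later MODIFIED (`mutation: 2`) and DELETED events. A toy in-memory model of that replay rule (the event types and the resource versions 2583510/2583511 come from the log; the earlier versions and names are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class Event:
    resource_version: int
    type: str      # ADDED / MODIFIED / DELETED
    mutation: int  # value of the configmap's "mutation" data key

def watch_from(events, resource_version):
    """Replay only events that occurred after the given resourceVersion."""
    return [e for e in events if e.resource_version > resource_version]

history = [
    Event(2583508, "ADDED", 0),      # hypothetical creation version
    Event(2583509, "MODIFIED", 1),   # first update: the watch starts here
    Event(2583510, "MODIFIED", 2),   # second update (seen in the log)
    Event(2583511, "DELETED", 2),    # deletion (seen in the log)
]
replayed = watch_from(history, 2583509)
# replayed holds the MODIFIED (mutation: 2) and DELETED events, matching
# the two "Got :" lines in the log above.
```

In a real cluster the API server performs this filtering; the sketch only captures the ordering guarantee the test asserts.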
SSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:56:48.242: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a service externalname-service with the type=ExternalName in namespace services-3807
STEP: changing the ExternalName service to type=ClusterIP
STEP: creating replication controller externalname-service in namespace services-3807
I0508 11:56:48.383577       7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-3807, replica count: 2
I0508 11:56:51.434067       7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0508 11:56:54.434317       7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
May  8 11:56:54.434: INFO: Creating new exec pod
May  8 11:56:59.631: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-3807 execpod4sjgk -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
May  8 11:56:59.872: INFO: stderr: "I0508 11:56:59.764712    4331 log.go:172] (0xc0008c40b0) (0xc000a76320) Create stream\nI0508 11:56:59.764792    4331 log.go:172] (0xc0008c40b0) (0xc000a76320) Stream added, broadcasting: 1\nI0508 11:56:59.768186    4331 log.go:172] (0xc0008c40b0) Reply frame received for 1\nI0508 11:56:59.768239    4331 log.go:172] (0xc0008c40b0) (0xc000685540) Create stream\nI0508 11:56:59.768257    4331 log.go:172] (0xc0008c40b0) (0xc000685540) Stream added, broadcasting: 3\nI0508 11:56:59.769766    4331 log.go:172] (0xc0008c40b0) Reply frame received for 3\nI0508 11:56:59.769815    4331 log.go:172] (0xc0008c40b0) (0xc000a763c0) Create stream\nI0508 11:56:59.769849    4331 log.go:172] (0xc0008c40b0) (0xc000a763c0) Stream added, broadcasting: 5\nI0508 11:56:59.770982    4331 log.go:172] (0xc0008c40b0) Reply frame received for 5\nI0508 11:56:59.863623    4331 log.go:172] (0xc0008c40b0) Data frame received for 5\nI0508 11:56:59.863669    4331 log.go:172] (0xc000a763c0) (5) Data frame handling\nI0508 11:56:59.863708    4331 log.go:172] (0xc000a763c0) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0508 11:56:59.864157    4331 log.go:172] (0xc0008c40b0) Data frame received for 5\nI0508 11:56:59.864178    4331 log.go:172] (0xc000a763c0) (5) Data frame handling\nI0508 11:56:59.864186    4331 log.go:172] (0xc000a763c0) (5) Data frame sent\nI0508 11:56:59.864193    4331 log.go:172] (0xc0008c40b0) Data frame received for 5\nI0508 11:56:59.864200    4331 log.go:172] (0xc000a763c0) (5) Data frame handling\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0508 11:56:59.864210    4331 log.go:172] (0xc0008c40b0) Data frame received for 3\nI0508 11:56:59.864236    4331 log.go:172] (0xc000685540) (3) Data frame handling\nI0508 11:56:59.866104    4331 log.go:172] (0xc0008c40b0) Data frame received for 1\nI0508 11:56:59.866123    4331 log.go:172] (0xc000a76320) (1) Data frame handling\nI0508 11:56:59.866144    4331 log.go:172] 
(0xc000a76320) (1) Data frame sent\nI0508 11:56:59.866157    4331 log.go:172] (0xc0008c40b0) (0xc000a76320) Stream removed, broadcasting: 1\nI0508 11:56:59.866174    4331 log.go:172] (0xc0008c40b0) Go away received\nI0508 11:56:59.867742    4331 log.go:172] (0xc0008c40b0) (0xc000a76320) Stream removed, broadcasting: 1\nI0508 11:56:59.867784    4331 log.go:172] (0xc0008c40b0) (0xc000685540) Stream removed, broadcasting: 3\nI0508 11:56:59.867797    4331 log.go:172] (0xc0008c40b0) (0xc000a763c0) Stream removed, broadcasting: 5\n"
May  8 11:56:59.872: INFO: stdout: ""
May  8 11:56:59.872: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-3807 execpod4sjgk -- /bin/sh -x -c nc -zv -t -w 2 10.100.71.5 80'
May  8 11:57:00.080: INFO: stderr: "I0508 11:57:00.004861    4351 log.go:172] (0xc00003a0b0) (0xc000827400) Create stream\nI0508 11:57:00.004929    4351 log.go:172] (0xc00003a0b0) (0xc000827400) Stream added, broadcasting: 1\nI0508 11:57:00.006738    4351 log.go:172] (0xc00003a0b0) Reply frame received for 1\nI0508 11:57:00.006782    4351 log.go:172] (0xc00003a0b0) (0xc000c2a000) Create stream\nI0508 11:57:00.006796    4351 log.go:172] (0xc00003a0b0) (0xc000c2a000) Stream added, broadcasting: 3\nI0508 11:57:00.007828    4351 log.go:172] (0xc00003a0b0) Reply frame received for 3\nI0508 11:57:00.007876    4351 log.go:172] (0xc00003a0b0) (0xc000c2a0a0) Create stream\nI0508 11:57:00.007902    4351 log.go:172] (0xc00003a0b0) (0xc000c2a0a0) Stream added, broadcasting: 5\nI0508 11:57:00.008742    4351 log.go:172] (0xc00003a0b0) Reply frame received for 5\nI0508 11:57:00.072817    4351 log.go:172] (0xc00003a0b0) Data frame received for 3\nI0508 11:57:00.072853    4351 log.go:172] (0xc00003a0b0) Data frame received for 5\nI0508 11:57:00.072903    4351 log.go:172] (0xc000c2a0a0) (5) Data frame handling\nI0508 11:57:00.072923    4351 log.go:172] (0xc000c2a0a0) (5) Data frame sent\nI0508 11:57:00.072939    4351 log.go:172] (0xc00003a0b0) Data frame received for 5\n+ nc -zv -t -w 2 10.100.71.5 80\nConnection to 10.100.71.5 80 port [tcp/http] succeeded!\nI0508 11:57:00.073040    4351 log.go:172] (0xc000c2a0a0) (5) Data frame handling\nI0508 11:57:00.073297    4351 log.go:172] (0xc000c2a000) (3) Data frame handling\nI0508 11:57:00.074747    4351 log.go:172] (0xc00003a0b0) Data frame received for 1\nI0508 11:57:00.074769    4351 log.go:172] (0xc000827400) (1) Data frame handling\nI0508 11:57:00.074785    4351 log.go:172] (0xc000827400) (1) Data frame sent\nI0508 11:57:00.074803    4351 log.go:172] (0xc00003a0b0) (0xc000827400) Stream removed, broadcasting: 1\nI0508 11:57:00.074821    4351 log.go:172] (0xc00003a0b0) Go away received\nI0508 11:57:00.075278    4351 log.go:172] 
(0xc00003a0b0) (0xc000827400) Stream removed, broadcasting: 1\nI0508 11:57:00.075307    4351 log.go:172] (0xc00003a0b0) (0xc000c2a000) Stream removed, broadcasting: 3\nI0508 11:57:00.075320    4351 log.go:172] (0xc00003a0b0) (0xc000c2a0a0) Stream removed, broadcasting: 5\n"
May  8 11:57:00.080: INFO: stdout: ""
May  8 11:57:00.080: INFO: Cleaning up the ExternalName to ClusterIP test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:57:00.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3807" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:11.937 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":275,"completed":231,"skipped":4014,"failed":0}
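Editor's note: the service test above verifies reachability with `nc -zv -t -w 2 <host> 80`, i.e. a zero-I/O TCP connect with a two-second timeout, first against the service DNS name and then against the ClusterIP. The same probe can be sketched in Python (host and port are caller-supplied placeholders, not the test's service):

```python
import socket

def tcp_probe(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout,
    mirroring `nc -zv -t -w 2 host port`."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

The exec-pod output above ("Connection to externalname-service 80 port [tcp/http] succeeded!") corresponds to this probe returning True for both the service name and `10.100.71.5`.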
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:57:00.179: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0508 11:57:10.380061       7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May  8 11:57:10.380: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:57:10.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6814" for this suite.

• [SLOW TEST:10.208 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":275,"completed":232,"skipped":4036,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:57:10.388: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
May  8 11:57:11.090: INFO: Pod name wrapped-volume-race-771e3bc5-8ea3-4f81-955e-6cc7fe8ea6ea: Found 0 pods out of 5
May  8 11:57:16.098: INFO: Pod name wrapped-volume-race-771e3bc5-8ea3-4f81-955e-6cc7fe8ea6ea: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-771e3bc5-8ea3-4f81-955e-6cc7fe8ea6ea in namespace emptydir-wrapper-7186, will wait for the garbage collector to delete the pods
May  8 11:57:30.182: INFO: Deleting ReplicationController wrapped-volume-race-771e3bc5-8ea3-4f81-955e-6cc7fe8ea6ea took: 7.933783ms
May  8 11:57:30.482: INFO: Terminating ReplicationController wrapped-volume-race-771e3bc5-8ea3-4f81-955e-6cc7fe8ea6ea pods took: 300.255761ms
STEP: Creating RC which spawns configmap-volume pods
May  8 11:57:43.747: INFO: Pod name wrapped-volume-race-c6f46b04-8e7a-4ac4-ac88-d67274e8637c: Found 0 pods out of 5
May  8 11:57:48.757: INFO: Pod name wrapped-volume-race-c6f46b04-8e7a-4ac4-ac88-d67274e8637c: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-c6f46b04-8e7a-4ac4-ac88-d67274e8637c in namespace emptydir-wrapper-7186, will wait for the garbage collector to delete the pods
May  8 11:58:03.786: INFO: Deleting ReplicationController wrapped-volume-race-c6f46b04-8e7a-4ac4-ac88-d67274e8637c took: 8.354246ms
May  8 11:58:04.086: INFO: Terminating ReplicationController wrapped-volume-race-c6f46b04-8e7a-4ac4-ac88-d67274e8637c pods took: 300.271143ms
STEP: Creating RC which spawns configmap-volume pods
May  8 11:58:14.047: INFO: Pod name wrapped-volume-race-529ba829-f7fd-44a3-97cb-8365ec3f8a15: Found 0 pods out of 5
May  8 11:58:19.068: INFO: Pod name wrapped-volume-race-529ba829-f7fd-44a3-97cb-8365ec3f8a15: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-529ba829-f7fd-44a3-97cb-8365ec3f8a15 in namespace emptydir-wrapper-7186, will wait for the garbage collector to delete the pods
May  8 11:58:31.244: INFO: Deleting ReplicationController wrapped-volume-race-529ba829-f7fd-44a3-97cb-8365ec3f8a15 took: 8.153796ms
May  8 11:58:31.645: INFO: Terminating ReplicationController wrapped-volume-race-529ba829-f7fd-44a3-97cb-8365ec3f8a15 pods took: 400.295427ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:58:44.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-7186" for this suite.

• [SLOW TEST:93.814 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":275,"completed":233,"skipped":4057,"failed":0}
SSSSSS
------------------------------
[sig-network] DNS 
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:58:44.202: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4722 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-4722;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4722 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-4722;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4722.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-4722.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4722.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-4722.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4722.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-4722.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4722.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-4722.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4722.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-4722.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4722.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-4722.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-4722.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 184.237.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.237.184_udp@PTR;check="$$(dig +tcp +noall +answer +search 184.237.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.237.184_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4722 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-4722;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4722 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-4722;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4722.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-4722.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4722.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-4722.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4722.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-4722.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4722.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-4722.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4722.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-4722.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4722.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-4722.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-4722.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 184.237.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.237.184_udp@PTR;check="$$(dig +tcp +noall +answer +search 184.237.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.237.184_tcp@PTR;sleep 1; done

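The probe commands above derive the pod's A-record name with `hostname -i | awk -F. '{print $1"-"$2"-"$3"-"$4".dns-4722.pod.cluster.local"}'`: dots in the pod IP become dashes, then the namespace and cluster domain are appended. A minimal standalone sketch of that derivation (the pod IP below is hypothetical; the real probe reads it from `hostname -i`, and `cluster.local` is the default cluster domain seen in the log):

```shell
pod_a_record() {
  # Mirror the awk pipeline from the probe above: dots in the pod IP
  # become dashes, then "<namespace>.pod.cluster.local" is appended.
  echo "$(printf '%s' "$1" | tr '.' '-').$2.pod.cluster.local"
}

# Hypothetical pod IP; the real probe derives it with `hostname -i`.
pod_a_record 10.244.1.7 dns-4722
```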
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May  8 11:58:52.455: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4722/dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8: the server could not find the requested resource (get pods dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8)
May  8 11:58:52.460: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4722/dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8: the server could not find the requested resource (get pods dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8)
May  8 11:58:52.467: INFO: Unable to read wheezy_udp@dns-test-service.dns-4722 from pod dns-4722/dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8: the server could not find the requested resource (get pods dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8)
May  8 11:58:52.472: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4722 from pod dns-4722/dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8: the server could not find the requested resource (get pods dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8)
May  8 11:58:52.478: INFO: Unable to read wheezy_udp@dns-test-service.dns-4722.svc from pod dns-4722/dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8: the server could not find the requested resource (get pods dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8)
May  8 11:58:52.484: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4722.svc from pod dns-4722/dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8: the server could not find the requested resource (get pods dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8)
May  8 11:58:52.502: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4722.svc from pod dns-4722/dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8: the server could not find the requested resource (get pods dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8)
May  8 11:58:52.568: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4722.svc from pod dns-4722/dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8: the server could not find the requested resource (get pods dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8)
May  8 11:58:52.874: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4722/dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8: the server could not find the requested resource (get pods dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8)
May  8 11:58:52.879: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4722/dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8: the server could not find the requested resource (get pods dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8)
May  8 11:58:52.886: INFO: Unable to read jessie_udp@dns-test-service.dns-4722 from pod dns-4722/dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8: the server could not find the requested resource (get pods dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8)
May  8 11:58:52.891: INFO: Unable to read jessie_tcp@dns-test-service.dns-4722 from pod dns-4722/dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8: the server could not find the requested resource (get pods dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8)
May  8 11:58:52.898: INFO: Unable to read jessie_udp@dns-test-service.dns-4722.svc from pod dns-4722/dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8: the server could not find the requested resource (get pods dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8)
May  8 11:58:52.903: INFO: Unable to read jessie_tcp@dns-test-service.dns-4722.svc from pod dns-4722/dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8: the server could not find the requested resource (get pods dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8)
May  8 11:58:52.910: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4722.svc from pod dns-4722/dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8: the server could not find the requested resource (get pods dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8)
May  8 11:58:52.924: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4722.svc from pod dns-4722/dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8: the server could not find the requested resource (get pods dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8)
May  8 11:58:53.000: INFO: Lookups using dns-4722/dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4722 wheezy_tcp@dns-test-service.dns-4722 wheezy_udp@dns-test-service.dns-4722.svc wheezy_tcp@dns-test-service.dns-4722.svc wheezy_udp@_http._tcp.dns-test-service.dns-4722.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4722.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4722 jessie_tcp@dns-test-service.dns-4722 jessie_udp@dns-test-service.dns-4722.svc jessie_tcp@dns-test-service.dns-4722.svc jessie_udp@_http._tcp.dns-test-service.dns-4722.svc jessie_tcp@_http._tcp.dns-test-service.dns-4722.svc]

May  8 11:58:58.004: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4722/dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8: the server could not find the requested resource (get pods dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8)
May  8 11:58:58.007: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4722/dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8: the server could not find the requested resource (get pods dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8)
May  8 11:58:58.010: INFO: Unable to read wheezy_udp@dns-test-service.dns-4722 from pod dns-4722/dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8: the server could not find the requested resource (get pods dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8)
May  8 11:58:58.014: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4722 from pod dns-4722/dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8: the server could not find the requested resource (get pods dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8)
May  8 11:58:58.017: INFO: Unable to read wheezy_udp@dns-test-service.dns-4722.svc from pod dns-4722/dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8: the server could not find the requested resource (get pods dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8)
May  8 11:58:58.020: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4722.svc from pod dns-4722/dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8: the server could not find the requested resource (get pods dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8)
May  8 11:58:58.042: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4722.svc from pod dns-4722/dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8: the server could not find the requested resource (get pods dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8)
May  8 11:58:58.046: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4722.svc from pod dns-4722/dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8: the server could not find the requested resource (get pods dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8)
May  8 11:58:58.163: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4722/dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8: the server could not find the requested resource (get pods dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8)
May  8 11:58:58.166: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4722/dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8: the server could not find the requested resource (get pods dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8)
May  8 11:58:58.169: INFO: Unable to read jessie_udp@dns-test-service.dns-4722 from pod dns-4722/dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8: the server could not find the requested resource (get pods dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8)
May  8 11:58:58.173: INFO: Unable to read jessie_tcp@dns-test-service.dns-4722 from pod dns-4722/dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8: the server could not find the requested resource (get pods dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8)
May  8 11:58:58.176: INFO: Unable to read jessie_udp@dns-test-service.dns-4722.svc from pod dns-4722/dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8: the server could not find the requested resource (get pods dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8)
May  8 11:58:58.179: INFO: Unable to read jessie_tcp@dns-test-service.dns-4722.svc from pod dns-4722/dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8: the server could not find the requested resource (get pods dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8)
May  8 11:58:58.182: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4722.svc from pod dns-4722/dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8: the server could not find the requested resource (get pods dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8)
May  8 11:58:58.185: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4722.svc from pod dns-4722/dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8: the server could not find the requested resource (get pods dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8)
May  8 11:58:58.204: INFO: Lookups using dns-4722/dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4722 wheezy_tcp@dns-test-service.dns-4722 wheezy_udp@dns-test-service.dns-4722.svc wheezy_tcp@dns-test-service.dns-4722.svc wheezy_udp@_http._tcp.dns-test-service.dns-4722.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4722.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4722 jessie_tcp@dns-test-service.dns-4722 jessie_udp@dns-test-service.dns-4722.svc jessie_tcp@dns-test-service.dns-4722.svc jessie_udp@_http._tcp.dns-test-service.dns-4722.svc jessie_tcp@_http._tcp.dns-test-service.dns-4722.svc]

May  8 11:59:03.005: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4722/dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8: the server could not find the requested resource (get pods dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8)
May  8 11:59:03.009: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4722/dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8: the server could not find the requested resource (get pods dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8)
May  8 11:59:03.013: INFO: Unable to read wheezy_udp@dns-test-service.dns-4722 from pod dns-4722/dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8: the server could not find the requested resource (get pods dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8)
May  8 11:59:03.017: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4722 from pod dns-4722/dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8: the server could not find the requested resource (get pods dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8)
May  8 11:59:03.020: INFO: Unable to read wheezy_udp@dns-test-service.dns-4722.svc from pod dns-4722/dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8: the server could not find the requested resource (get pods dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8)
May  8 11:59:03.024: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4722.svc from pod dns-4722/dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8: the server could not find the requested resource (get pods dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8)
May  8 11:59:03.028: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4722.svc from pod dns-4722/dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8: the server could not find the requested resource (get pods dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8)
May  8 11:59:03.031: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4722.svc from pod dns-4722/dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8: the server could not find the requested resource (get pods dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8)
May  8 11:59:03.051: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4722/dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8: the server could not find the requested resource (get pods dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8)
May  8 11:59:03.055: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4722/dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8: the server could not find the requested resource (get pods dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8)
May  8 11:59:03.058: INFO: Unable to read jessie_udp@dns-test-service.dns-4722 from pod dns-4722/dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8: the server could not find the requested resource (get pods dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8)
May  8 11:59:03.061: INFO: Unable to read jessie_tcp@dns-test-service.dns-4722 from pod dns-4722/dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8: the server could not find the requested resource (get pods dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8)
May  8 11:59:03.065: INFO: Unable to read jessie_udp@dns-test-service.dns-4722.svc from pod dns-4722/dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8: the server could not find the requested resource (get pods dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8)
May  8 11:59:03.069: INFO: Unable to read jessie_tcp@dns-test-service.dns-4722.svc from pod dns-4722/dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8: the server could not find the requested resource (get pods dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8)
May  8 11:59:03.072: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4722.svc from pod dns-4722/dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8: the server could not find the requested resource (get pods dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8)
May  8 11:59:03.075: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4722.svc from pod dns-4722/dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8: the server could not find the requested resource (get pods dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8)
May  8 11:59:03.097: INFO: Lookups using dns-4722/dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4722 wheezy_tcp@dns-test-service.dns-4722 wheezy_udp@dns-test-service.dns-4722.svc wheezy_tcp@dns-test-service.dns-4722.svc wheezy_udp@_http._tcp.dns-test-service.dns-4722.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4722.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4722 jessie_tcp@dns-test-service.dns-4722 jessie_udp@dns-test-service.dns-4722.svc jessie_tcp@dns-test-service.dns-4722.svc jessie_udp@_http._tcp.dns-test-service.dns-4722.svc jessie_tcp@_http._tcp.dns-test-service.dns-4722.svc]

May  8 11:59:08.006: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4722/dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8: the server could not find the requested resource (get pods dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8)
May  8 11:59:08.010: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4722/dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8: the server could not find the requested resource (get pods dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8)
May  8 11:59:08.013: INFO: Unable to read wheezy_udp@dns-test-service.dns-4722 from pod dns-4722/dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8: the server could not find the requested resource (get pods dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8)
May  8 11:59:08.016: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4722 from pod dns-4722/dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8: the server could not find the requested resource (get pods dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8)
May  8 11:59:08.019: INFO: Unable to read wheezy_udp@dns-test-service.dns-4722.svc from pod dns-4722/dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8: the server could not find the requested resource (get pods dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8)
May  8 11:59:08.022: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4722.svc from pod dns-4722/dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8: the server could not find the requested resource (get pods dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8)
May  8 11:59:08.025: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4722.svc from pod dns-4722/dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8: the server could not find the requested resource (get pods dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8)
May  8 11:59:08.027: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4722.svc from pod dns-4722/dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8: the server could not find the requested resource (get pods dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8)
May  8 11:59:08.046: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4722/dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8: the server could not find the requested resource (get pods dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8)
May  8 11:59:08.090: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4722/dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8: the server could not find the requested resource (get pods dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8)
May  8 11:59:08.095: INFO: Unable to read jessie_udp@dns-test-service.dns-4722 from pod dns-4722/dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8: the server could not find the requested resource (get pods dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8)
May  8 11:59:08.099: INFO: Unable to read jessie_tcp@dns-test-service.dns-4722 from pod dns-4722/dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8: the server could not find the requested resource (get pods dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8)
May  8 11:59:08.101: INFO: Unable to read jessie_udp@dns-test-service.dns-4722.svc from pod dns-4722/dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8: the server could not find the requested resource (get pods dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8)
May  8 11:59:08.104: INFO: Unable to read jessie_tcp@dns-test-service.dns-4722.svc from pod dns-4722/dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8: the server could not find the requested resource (get pods dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8)
May  8 11:59:08.107: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4722.svc from pod dns-4722/dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8: the server could not find the requested resource (get pods dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8)
May  8 11:59:08.108: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4722.svc from pod dns-4722/dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8: the server could not find the requested resource (get pods dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8)
May  8 11:59:08.123: INFO: Lookups using dns-4722/dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4722 wheezy_tcp@dns-test-service.dns-4722 wheezy_udp@dns-test-service.dns-4722.svc wheezy_tcp@dns-test-service.dns-4722.svc wheezy_udp@_http._tcp.dns-test-service.dns-4722.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4722.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4722 jessie_tcp@dns-test-service.dns-4722 jessie_udp@dns-test-service.dns-4722.svc jessie_tcp@dns-test-service.dns-4722.svc jessie_udp@_http._tcp.dns-test-service.dns-4722.svc jessie_tcp@_http._tcp.dns-test-service.dns-4722.svc]

May  8 11:59:13.013: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4722/dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8: the server could not find the requested resource (get pods dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8)
May  8 11:59:13.016: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4722/dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8: the server could not find the requested resource (get pods dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8)
May  8 11:59:13.036: INFO: Unable to read wheezy_udp@dns-test-service.dns-4722 from pod dns-4722/dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8: the server could not find the requested resource (get pods dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8)
May  8 11:59:13.039: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4722 from pod dns-4722/dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8: the server could not find the requested resource (get pods dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8)
May  8 11:59:13.042: INFO: Unable to read wheezy_udp@dns-test-service.dns-4722.svc from pod dns-4722/dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8: the server could not find the requested resource (get pods dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8)
May  8 11:59:13.045: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4722.svc from pod dns-4722/dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8: the server could not find the requested resource (get pods dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8)
May  8 11:59:13.048: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4722.svc from pod dns-4722/dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8: the server could not find the requested resource (get pods dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8)
May  8 11:59:13.051: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4722.svc from pod dns-4722/dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8: the server could not find the requested resource (get pods dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8)
May  8 11:59:13.071: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4722/dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8: the server could not find the requested resource (get pods dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8)
May  8 11:59:13.101: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4722/dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8: the server could not find the requested resource (get pods dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8)
May  8 11:59:13.105: INFO: Unable to read jessie_udp@dns-test-service.dns-4722 from pod dns-4722/dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8: the server could not find the requested resource (get pods dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8)
May  8 11:59:13.108: INFO: Unable to read jessie_tcp@dns-test-service.dns-4722 from pod dns-4722/dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8: the server could not find the requested resource (get pods dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8)
May  8 11:59:13.111: INFO: Unable to read jessie_udp@dns-test-service.dns-4722.svc from pod dns-4722/dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8: the server could not find the requested resource (get pods dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8)
May  8 11:59:13.114: INFO: Unable to read jessie_tcp@dns-test-service.dns-4722.svc from pod dns-4722/dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8: the server could not find the requested resource (get pods dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8)
May  8 11:59:13.116: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4722.svc from pod dns-4722/dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8: the server could not find the requested resource (get pods dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8)
May  8 11:59:13.119: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4722.svc from pod dns-4722/dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8: the server could not find the requested resource (get pods dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8)
May  8 11:59:13.138: INFO: Lookups using dns-4722/dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4722 wheezy_tcp@dns-test-service.dns-4722 wheezy_udp@dns-test-service.dns-4722.svc wheezy_tcp@dns-test-service.dns-4722.svc wheezy_udp@_http._tcp.dns-test-service.dns-4722.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4722.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4722 jessie_tcp@dns-test-service.dns-4722 jessie_udp@dns-test-service.dns-4722.svc jessie_tcp@dns-test-service.dns-4722.svc jessie_udp@_http._tcp.dns-test-service.dns-4722.svc jessie_tcp@_http._tcp.dns-test-service.dns-4722.svc]

May  8 11:59:18.004: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4722/dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8: the server could not find the requested resource (get pods dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8)
May  8 11:59:18.008: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4722/dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8: the server could not find the requested resource (get pods dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8)
May  8 11:59:18.011: INFO: Unable to read wheezy_udp@dns-test-service.dns-4722 from pod dns-4722/dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8: the server could not find the requested resource (get pods dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8)
May  8 11:59:18.015: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4722 from pod dns-4722/dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8: the server could not find the requested resource (get pods dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8)
May  8 11:59:18.018: INFO: Unable to read wheezy_udp@dns-test-service.dns-4722.svc from pod dns-4722/dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8: the server could not find the requested resource (get pods dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8)
May  8 11:59:18.021: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4722.svc from pod dns-4722/dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8: the server could not find the requested resource (get pods dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8)
May  8 11:59:18.023: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4722.svc from pod dns-4722/dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8: the server could not find the requested resource (get pods dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8)
May  8 11:59:18.027: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4722.svc from pod dns-4722/dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8: the server could not find the requested resource (get pods dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8)
May  8 11:59:18.078: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4722/dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8: the server could not find the requested resource (get pods dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8)
May  8 11:59:18.102: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4722/dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8: the server could not find the requested resource (get pods dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8)
May  8 11:59:18.105: INFO: Unable to read jessie_udp@dns-test-service.dns-4722 from pod dns-4722/dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8: the server could not find the requested resource (get pods dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8)
May  8 11:59:18.108: INFO: Unable to read jessie_tcp@dns-test-service.dns-4722 from pod dns-4722/dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8: the server could not find the requested resource (get pods dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8)
May  8 11:59:18.111: INFO: Unable to read jessie_udp@dns-test-service.dns-4722.svc from pod dns-4722/dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8: the server could not find the requested resource (get pods dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8)
May  8 11:59:18.115: INFO: Unable to read jessie_tcp@dns-test-service.dns-4722.svc from pod dns-4722/dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8: the server could not find the requested resource (get pods dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8)
May  8 11:59:18.118: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4722.svc from pod dns-4722/dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8: the server could not find the requested resource (get pods dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8)
May  8 11:59:18.121: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4722.svc from pod dns-4722/dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8: the server could not find the requested resource (get pods dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8)
May  8 11:59:18.148: INFO: Lookups using dns-4722/dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4722 wheezy_tcp@dns-test-service.dns-4722 wheezy_udp@dns-test-service.dns-4722.svc wheezy_tcp@dns-test-service.dns-4722.svc wheezy_udp@_http._tcp.dns-test-service.dns-4722.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4722.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4722 jessie_tcp@dns-test-service.dns-4722 jessie_udp@dns-test-service.dns-4722.svc jessie_tcp@dns-test-service.dns-4722.svc jessie_udp@_http._tcp.dns-test-service.dns-4722.svc jessie_tcp@_http._tcp.dns-test-service.dns-4722.svc]

May  8 11:59:23.093: INFO: DNS probes using dns-4722/dns-test-7fbd51a8-0e66-4bb2-b87b-3a66f326cae8 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:59:23.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-4722" for this suite.

• [SLOW TEST:39.740 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":275,"completed":234,"skipped":4063,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:59:23.942: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-f961817e-39c2-4d44-b126-7289b30e5335
STEP: Creating a pod to test consume configMaps
May  8 11:59:24.070: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1a4e0c0d-7e99-4b7a-97ac-85e784be905f" in namespace "projected-7946" to be "Succeeded or Failed"
May  8 11:59:24.078: INFO: Pod "pod-projected-configmaps-1a4e0c0d-7e99-4b7a-97ac-85e784be905f": Phase="Pending", Reason="", readiness=false. Elapsed: 7.937958ms
May  8 11:59:26.155: INFO: Pod "pod-projected-configmaps-1a4e0c0d-7e99-4b7a-97ac-85e784be905f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.085573936s
May  8 11:59:28.599: INFO: Pod "pod-projected-configmaps-1a4e0c0d-7e99-4b7a-97ac-85e784be905f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.529109995s
STEP: Saw pod success
May  8 11:59:28.599: INFO: Pod "pod-projected-configmaps-1a4e0c0d-7e99-4b7a-97ac-85e784be905f" satisfied condition "Succeeded or Failed"
May  8 11:59:28.976: INFO: Trying to get logs from node kali-worker pod pod-projected-configmaps-1a4e0c0d-7e99-4b7a-97ac-85e784be905f container projected-configmap-volume-test: 
STEP: delete the pod
May  8 11:59:29.213: INFO: Waiting for pod pod-projected-configmaps-1a4e0c0d-7e99-4b7a-97ac-85e784be905f to disappear
May  8 11:59:29.227: INFO: Pod pod-projected-configmaps-1a4e0c0d-7e99-4b7a-97ac-85e784be905f no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:59:29.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7946" for this suite.

• [SLOW TEST:5.292 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":235,"skipped":4070,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:59:29.235: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May  8 11:59:29.391: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7eb337a1-8d3d-4b76-89ed-1c06caf8e0ff" in namespace "downward-api-8703" to be "Succeeded or Failed"
May  8 11:59:29.425: INFO: Pod "downwardapi-volume-7eb337a1-8d3d-4b76-89ed-1c06caf8e0ff": Phase="Pending", Reason="", readiness=false. Elapsed: 34.642773ms
May  8 11:59:31.429: INFO: Pod "downwardapi-volume-7eb337a1-8d3d-4b76-89ed-1c06caf8e0ff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038115217s
May  8 11:59:33.592: INFO: Pod "downwardapi-volume-7eb337a1-8d3d-4b76-89ed-1c06caf8e0ff": Phase="Pending", Reason="", readiness=false. Elapsed: 4.201373381s
May  8 11:59:35.596: INFO: Pod "downwardapi-volume-7eb337a1-8d3d-4b76-89ed-1c06caf8e0ff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.205690251s
STEP: Saw pod success
May  8 11:59:35.597: INFO: Pod "downwardapi-volume-7eb337a1-8d3d-4b76-89ed-1c06caf8e0ff" satisfied condition "Succeeded or Failed"
May  8 11:59:35.600: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-7eb337a1-8d3d-4b76-89ed-1c06caf8e0ff container client-container: 
STEP: delete the pod
May  8 11:59:35.677: INFO: Waiting for pod downwardapi-volume-7eb337a1-8d3d-4b76-89ed-1c06caf8e0ff to disappear
May  8 11:59:35.683: INFO: Pod downwardapi-volume-7eb337a1-8d3d-4b76-89ed-1c06caf8e0ff no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 11:59:35.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8703" for this suite.

• [SLOW TEST:6.456 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":236,"skipped":4083,"failed":0}
SSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 11:59:35.691: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod pod-subpath-test-configmap-k8qv
STEP: Creating a pod to test atomic-volume-subpath
May  8 11:59:35.781: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-k8qv" in namespace "subpath-9374" to be "Succeeded or Failed"
May  8 11:59:35.868: INFO: Pod "pod-subpath-test-configmap-k8qv": Phase="Pending", Reason="", readiness=false. Elapsed: 86.953129ms
May  8 11:59:37.910: INFO: Pod "pod-subpath-test-configmap-k8qv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.128206898s
May  8 11:59:39.914: INFO: Pod "pod-subpath-test-configmap-k8qv": Phase="Running", Reason="", readiness=true. Elapsed: 4.132991502s
May  8 11:59:41.928: INFO: Pod "pod-subpath-test-configmap-k8qv": Phase="Running", Reason="", readiness=true. Elapsed: 6.146601557s
May  8 11:59:43.932: INFO: Pod "pod-subpath-test-configmap-k8qv": Phase="Running", Reason="", readiness=true. Elapsed: 8.15049447s
May  8 11:59:45.936: INFO: Pod "pod-subpath-test-configmap-k8qv": Phase="Running", Reason="", readiness=true. Elapsed: 10.154495477s
May  8 11:59:47.940: INFO: Pod "pod-subpath-test-configmap-k8qv": Phase="Running", Reason="", readiness=true. Elapsed: 12.158687324s
May  8 11:59:49.944: INFO: Pod "pod-subpath-test-configmap-k8qv": Phase="Running", Reason="", readiness=true. Elapsed: 14.162695311s
May  8 11:59:51.949: INFO: Pod "pod-subpath-test-configmap-k8qv": Phase="Running", Reason="", readiness=true. Elapsed: 16.167585024s
May  8 11:59:53.953: INFO: Pod "pod-subpath-test-configmap-k8qv": Phase="Running", Reason="", readiness=true. Elapsed: 18.171939906s
May  8 11:59:55.958: INFO: Pod "pod-subpath-test-configmap-k8qv": Phase="Running", Reason="", readiness=true. Elapsed: 20.176436751s
May  8 11:59:57.962: INFO: Pod "pod-subpath-test-configmap-k8qv": Phase="Running", Reason="", readiness=true. Elapsed: 22.180354646s
May  8 11:59:59.966: INFO: Pod "pod-subpath-test-configmap-k8qv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.184461367s
STEP: Saw pod success
May  8 11:59:59.966: INFO: Pod "pod-subpath-test-configmap-k8qv" satisfied condition "Succeeded or Failed"
May  8 11:59:59.969: INFO: Trying to get logs from node kali-worker2 pod pod-subpath-test-configmap-k8qv container test-container-subpath-configmap-k8qv: 
STEP: delete the pod
May  8 12:00:00.020: INFO: Waiting for pod pod-subpath-test-configmap-k8qv to disappear
May  8 12:00:00.025: INFO: Pod pod-subpath-test-configmap-k8qv no longer exists
STEP: Deleting pod pod-subpath-test-configmap-k8qv
May  8 12:00:00.025: INFO: Deleting pod "pod-subpath-test-configmap-k8qv" in namespace "subpath-9374"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 12:00:00.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-9374" for this suite.

• [SLOW TEST:24.343 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":275,"completed":237,"skipped":4089,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 12:00:00.035: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-19
STEP: Creating active service to test reachability when its FQDN is referred to as externalName for another service
STEP: creating service externalsvc in namespace services-19
STEP: creating replication controller externalsvc in namespace services-19
I0508 12:00:00.360524       7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-19, replica count: 2
I0508 12:00:03.411014       7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0508 12:00:06.411281       7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: changing the ClusterIP service to type=ExternalName
May  8 12:00:06.453: INFO: Creating new exec pod
May  8 12:00:10.501: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-19 execpoddrkb5 -- /bin/sh -x -c nslookup clusterip-service'
May  8 12:00:10.776: INFO: stderr: "I0508 12:00:10.648524    4372 log.go:172] (0xc00072ab00) (0xc000617400) Create stream\nI0508 12:00:10.648634    4372 log.go:172] (0xc00072ab00) (0xc000617400) Stream added, broadcasting: 1\nI0508 12:00:10.651926    4372 log.go:172] (0xc00072ab00) Reply frame received for 1\nI0508 12:00:10.651965    4372 log.go:172] (0xc00072ab00) (0xc0008d0000) Create stream\nI0508 12:00:10.651975    4372 log.go:172] (0xc00072ab00) (0xc0008d0000) Stream added, broadcasting: 3\nI0508 12:00:10.653264    4372 log.go:172] (0xc00072ab00) Reply frame received for 3\nI0508 12:00:10.653295    4372 log.go:172] (0xc00072ab00) (0xc000502000) Create stream\nI0508 12:00:10.653303    4372 log.go:172] (0xc00072ab00) (0xc000502000) Stream added, broadcasting: 5\nI0508 12:00:10.654442    4372 log.go:172] (0xc00072ab00) Reply frame received for 5\nI0508 12:00:10.759670    4372 log.go:172] (0xc00072ab00) Data frame received for 5\nI0508 12:00:10.759691    4372 log.go:172] (0xc000502000) (5) Data frame handling\nI0508 12:00:10.759703    4372 log.go:172] (0xc000502000) (5) Data frame sent\n+ nslookup clusterip-service\nI0508 12:00:10.767995    4372 log.go:172] (0xc00072ab00) Data frame received for 3\nI0508 12:00:10.768012    4372 log.go:172] (0xc0008d0000) (3) Data frame handling\nI0508 12:00:10.768028    4372 log.go:172] (0xc0008d0000) (3) Data frame sent\nI0508 12:00:10.768990    4372 log.go:172] (0xc00072ab00) Data frame received for 3\nI0508 12:00:10.769043    4372 log.go:172] (0xc0008d0000) (3) Data frame handling\nI0508 12:00:10.769056    4372 log.go:172] (0xc0008d0000) (3) Data frame sent\nI0508 12:00:10.769449    4372 log.go:172] (0xc00072ab00) Data frame received for 5\nI0508 12:00:10.769472    4372 log.go:172] (0xc000502000) (5) Data frame handling\nI0508 12:00:10.769488    4372 log.go:172] (0xc00072ab00) Data frame received for 3\nI0508 12:00:10.769494    4372 log.go:172] (0xc0008d0000) (3) Data frame handling\nI0508 12:00:10.771383    4372 log.go:172] (0xc00072ab00) Data frame received for 1\nI0508 12:00:10.771404    4372 log.go:172] (0xc000617400) (1) Data frame handling\nI0508 12:00:10.771425    4372 log.go:172] (0xc000617400) (1) Data frame sent\nI0508 12:00:10.771438    4372 log.go:172] (0xc00072ab00) (0xc000617400) Stream removed, broadcasting: 1\nI0508 12:00:10.771586    4372 log.go:172] (0xc00072ab00) Go away received\nI0508 12:00:10.771791    4372 log.go:172] (0xc00072ab00) (0xc000617400) Stream removed, broadcasting: 1\nI0508 12:00:10.771807    4372 log.go:172] (0xc00072ab00) (0xc0008d0000) Stream removed, broadcasting: 3\nI0508 12:00:10.771816    4372 log.go:172] (0xc00072ab00) (0xc000502000) Stream removed, broadcasting: 5\n"
May  8 12:00:10.776: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-19.svc.cluster.local\tcanonical name = externalsvc.services-19.svc.cluster.local.\nName:\texternalsvc.services-19.svc.cluster.local\nAddress: 10.104.28.16\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-19, will wait for the garbage collector to delete the pods
May  8 12:00:10.838: INFO: Deleting ReplicationController externalsvc took: 7.726189ms
May  8 12:00:11.138: INFO: Terminating ReplicationController externalsvc pods took: 300.231019ms
May  8 12:00:16.272: INFO: Cleaning up the ClusterIP to ExternalName test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 12:00:16.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-19" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:16.260 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":275,"completed":238,"skipped":4112,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Servers with support for Table transformation 
  should return a 406 for a backend which does not implement metadata [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 12:00:16.295: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename tables
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47
[It] should return a 406 for a backend which does not implement metadata [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 12:00:16.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-2002" for this suite.
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":275,"completed":239,"skipped":4126,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 12:00:16.379: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May  8 12:00:16.505: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3208e728-e7c8-404c-98a8-309ad686df31" in namespace "downward-api-2944" to be "Succeeded or Failed"
May  8 12:00:16.528: INFO: Pod "downwardapi-volume-3208e728-e7c8-404c-98a8-309ad686df31": Phase="Pending", Reason="", readiness=false. Elapsed: 23.427975ms
May  8 12:00:18.532: INFO: Pod "downwardapi-volume-3208e728-e7c8-404c-98a8-309ad686df31": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027862777s
May  8 12:00:20.536: INFO: Pod "downwardapi-volume-3208e728-e7c8-404c-98a8-309ad686df31": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031679187s
STEP: Saw pod success
May  8 12:00:20.536: INFO: Pod "downwardapi-volume-3208e728-e7c8-404c-98a8-309ad686df31" satisfied condition "Succeeded or Failed"
May  8 12:00:20.539: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-3208e728-e7c8-404c-98a8-309ad686df31 container client-container: 
STEP: delete the pod
May  8 12:00:20.573: INFO: Waiting for pod downwardapi-volume-3208e728-e7c8-404c-98a8-309ad686df31 to disappear
May  8 12:00:20.576: INFO: Pod downwardapi-volume-3208e728-e7c8-404c-98a8-309ad686df31 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 12:00:20.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2944" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":240,"skipped":4152,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 12:00:20.584: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name secret-emptykey-test-64f6ba09-b69c-4577-a256-83adc038cca4
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 12:00:20.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-930" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":275,"completed":241,"skipped":4182,"failed":0}
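The Secrets test above succeeds when the apiserver rejects the create. The rule being exercised: Secret data keys must be non-empty, at most 253 characters, and consist only of alphanumerics, '-', '_' or '.'. A local sketch of that validation (not the apiserver's code, just the same rule expressed in Python):

```python
import re

# Kubernetes requires Secret/ConfigMap data keys to be non-empty, at most
# 253 characters, and made up of alphanumerics, '-', '_' or '.'.
KEY_RE = re.compile(r"^[-._a-zA-Z0-9]+$")

def is_valid_secret_key(key: str) -> bool:
    return len(key) <= 253 and bool(KEY_RE.match(key))

assert not is_valid_secret_key("")          # empty key -> rejected, as in this test
assert is_valid_secret_key("tls.crt")       # typical valid key
assert not is_valid_secret_key("bad key")   # spaces are not allowed
```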
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 12:00:20.714: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: set up a multi version CRD
May  8 12:00:20.760: INFO: >>> kubeConfig: /root/.kube/config
STEP: mark a version not served
STEP: check the unserved version gets removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 12:00:34.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-9809" for this suite.

• [SLOW TEST:13.529 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":275,"completed":242,"skipped":4244,"failed":0}
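The CRD test above checks that flipping a version's `served` flag to false removes that version's definitions from the published OpenAPI spec while leaving the other version untouched. A toy model of that behavior (the data shapes and the `published_definitions` helper are hypothetical, chosen only to make the invariant concrete):

```python
# Toy model: each CRD version carries a 'served' flag; the published OpenAPI
# spec keeps definitions only for versions that are currently served.
def published_definitions(crd_versions, definitions):
    served = {v["name"] for v in crd_versions if v["served"]}
    return {name: d for name, d in definitions.items() if d["version"] in served}

versions = [{"name": "v1", "served": True}, {"name": "v2", "served": True}]
defs = {
    "com.example.v1.Foo": {"version": "v1"},
    "com.example.v2.Foo": {"version": "v2"},
}
assert set(published_definitions(versions, defs)) == {
    "com.example.v1.Foo", "com.example.v2.Foo"}

# Mark v2 as not served: its definition disappears, v1 is unchanged.
versions[1]["served"] = False
assert set(published_definitions(versions, defs)) == {"com.example.v1.Foo"}
```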
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 12:00:34.244: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May  8 12:00:34.942: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May  8 12:00:36.951: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724536034, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724536034, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724536035, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724536034, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May  8 12:00:40.007: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May  8 12:00:40.011: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-3495-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 12:00:41.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7746" for this suite.
STEP: Destroying namespace "webhook-7746-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:7.007 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":275,"completed":243,"skipped":4262,"failed":0}
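"Mutate with pruning" above combines two apiserver steps: the mutating webhook patches the custom resource, then structural-schema pruning drops any field the schema does not declare. A rough sketch of that two-stage pipeline (the field names, `mutate`, and `prune` helpers are hypothetical illustrations, not the e2e webhook's real payload):

```python
# Hypothetical sketch: a mutating webhook injects a field, then the apiserver
# prunes any top-level field not declared in the CRD's structural schema.
def mutate(obj):
    patched = dict(obj)
    patched["mutation-stage"] = "webhook-applied"  # field the webhook injects
    return patched

def prune(obj, allowed_fields):
    return {k: v for k, v in obj.items() if k in allowed_fields}

schema_fields = {"apiVersion", "kind", "metadata", "spec"}
cr = {"apiVersion": "webhook.example.com/v1", "kind": "Thing",
      "metadata": {}, "spec": {}}
stored = prune(mutate(cr), schema_fields)

assert "mutation-stage" not in stored   # pruned: not in the structural schema
assert stored["spec"] == {}             # declared fields survive untouched
```

The ordering matters: mutation runs first, so a webhook that writes undeclared fields sees them silently pruned before storage, which is exactly the interaction this conformance test pins down.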
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 12:00:41.251: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-lksd9 in namespace proxy-2843
I0508 12:00:41.528051       7 runners.go:190] Created replication controller with name: proxy-service-lksd9, namespace: proxy-2843, replica count: 1
I0508 12:00:42.578646       7 runners.go:190] proxy-service-lksd9 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0508 12:00:43.578822       7 runners.go:190] proxy-service-lksd9 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0508 12:00:44.579068       7 runners.go:190] proxy-service-lksd9 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0508 12:00:45.579291       7 runners.go:190] proxy-service-lksd9 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0508 12:00:46.579526       7 runners.go:190] proxy-service-lksd9 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
May  8 12:00:46.602: INFO: setup took 5.256293657s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
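The 16 cases are the proxy URL variants visible in the attempt lines below: 10 pod endpoints (plain, `http:`-prefixed, and `https:`-prefixed port combinations) plus 6 service endpoints (plain, `http:`, and `https:` named ports), each hit 20 times. A small sketch that rebuilds the attempt matrix from those combinations and confirms the 16 × 20 = 320 arithmetic:

```python
ns, pod, svc = "proxy-2843", "proxy-service-lksd9-wwp47", "proxy-service-lksd9"

pod_ports = ["", ":160", ":162", ":1080"]    # plain pod endpoints (4)
http_pod_ports = [":160", ":162", ":1080"]   # "http:"-prefixed (3)
https_pod_ports = [":443", ":460", ":462"]   # "https:"-prefixed (3)
svc_ports = [":portname1", ":portname2"]     # named service ports (2 each)

cases = (
    [f"/api/v1/namespaces/{ns}/pods/{pod}{p}/proxy/" for p in pod_ports]
    + [f"/api/v1/namespaces/{ns}/pods/http:{pod}{p}/proxy/" for p in http_pod_ports]
    + [f"/api/v1/namespaces/{ns}/pods/https:{pod}{p}/proxy/" for p in https_pod_ports]
    + [f"/api/v1/namespaces/{ns}/services/{svc}{p}/proxy/" for p in svc_ports]
    + [f"/api/v1/namespaces/{ns}/services/http:{svc}{p}/proxy/" for p in svc_ports]
    + [f"/api/v1/namespaces/{ns}/services/https:{svc}:tlsportname{i}/proxy/"
       for i in (1, 2)]
)
attempts = [(i, c) for i in range(20) for c in cases]
assert len(cases) == 16 and len(attempts) == 320
```

The `(0)`, `(1)`, … prefixes on the log lines below are the attempt index; each index covers all 16 URLs before the next attempt begins.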
May  8 12:00:46.637: INFO: (0) /api/v1/namespaces/proxy-2843/pods/http:proxy-service-lksd9-wwp47:162/proxy/: bar (200; 34.222675ms)
May  8 12:00:46.637: INFO: (0) /api/v1/namespaces/proxy-2843/pods/http:proxy-service-lksd9-wwp47:160/proxy/: foo (200; 34.197064ms)
May  8 12:00:46.637: INFO: (0) /api/v1/namespaces/proxy-2843/pods/proxy-service-lksd9-wwp47/proxy/: test (200; 34.962427ms)
May  8 12:00:46.638: INFO: (0) /api/v1/namespaces/proxy-2843/pods/http:proxy-service-lksd9-wwp47:1080/proxy/: ... (200; 35.11977ms)
May  8 12:00:46.638: INFO: (0) /api/v1/namespaces/proxy-2843/pods/proxy-service-lksd9-wwp47:1080/proxy/: test<... (200; 35.202611ms)
May  8 12:00:46.643: INFO: (0) /api/v1/namespaces/proxy-2843/pods/proxy-service-lksd9-wwp47:160/proxy/: foo (200; 40.520345ms)
May  8 12:00:46.644: INFO: (0) /api/v1/namespaces/proxy-2843/services/http:proxy-service-lksd9:portname2/proxy/: bar (200; 40.841906ms)
May  8 12:00:46.645: INFO: (0) /api/v1/namespaces/proxy-2843/services/proxy-service-lksd9:portname2/proxy/: bar (200; 42.308306ms)
May  8 12:00:46.648: INFO: (0) /api/v1/namespaces/proxy-2843/pods/proxy-service-lksd9-wwp47:162/proxy/: bar (200; 44.981461ms)
May  8 12:00:46.648: INFO: (0) /api/v1/namespaces/proxy-2843/services/http:proxy-service-lksd9:portname1/proxy/: foo (200; 45.384502ms)
May  8 12:00:46.649: INFO: (0) /api/v1/namespaces/proxy-2843/services/proxy-service-lksd9:portname1/proxy/: foo (200; 45.952278ms)
May  8 12:00:46.655: INFO: (0) /api/v1/namespaces/proxy-2843/services/https:proxy-service-lksd9:tlsportname2/proxy/: tls qux (200; 52.343566ms)
May  8 12:00:46.655: INFO: (0) /api/v1/namespaces/proxy-2843/services/https:proxy-service-lksd9:tlsportname1/proxy/: tls baz (200; 52.191288ms)
May  8 12:00:46.655: INFO: (0) /api/v1/namespaces/proxy-2843/pods/https:proxy-service-lksd9-wwp47:462/proxy/: tls qux (200; 52.453097ms)
May  8 12:00:46.655: INFO: (0) /api/v1/namespaces/proxy-2843/pods/https:proxy-service-lksd9-wwp47:443/proxy/: ... (200; 14.134994ms)
May  8 12:00:46.669: INFO: (1) /api/v1/namespaces/proxy-2843/pods/proxy-service-lksd9-wwp47:160/proxy/: foo (200; 14.170632ms)
May  8 12:00:46.669: INFO: (1) /api/v1/namespaces/proxy-2843/pods/https:proxy-service-lksd9-wwp47:460/proxy/: tls baz (200; 14.363492ms)
May  8 12:00:46.670: INFO: (1) /api/v1/namespaces/proxy-2843/pods/proxy-service-lksd9-wwp47:1080/proxy/: test<... (200; 14.558387ms)
May  8 12:00:46.670: INFO: (1) /api/v1/namespaces/proxy-2843/pods/proxy-service-lksd9-wwp47:162/proxy/: bar (200; 15.02233ms)
May  8 12:00:46.670: INFO: (1) /api/v1/namespaces/proxy-2843/pods/https:proxy-service-lksd9-wwp47:443/proxy/: test (200; 15.843423ms)
May  8 12:00:46.671: INFO: (1) /api/v1/namespaces/proxy-2843/pods/https:proxy-service-lksd9-wwp47:462/proxy/: tls qux (200; 15.930488ms)
May  8 12:00:46.672: INFO: (1) /api/v1/namespaces/proxy-2843/services/https:proxy-service-lksd9:tlsportname1/proxy/: tls baz (200; 16.510393ms)
May  8 12:00:46.672: INFO: (1) /api/v1/namespaces/proxy-2843/services/proxy-service-lksd9:portname1/proxy/: foo (200; 16.538862ms)
May  8 12:00:46.672: INFO: (1) /api/v1/namespaces/proxy-2843/services/http:proxy-service-lksd9:portname1/proxy/: foo (200; 16.514385ms)
May  8 12:00:46.672: INFO: (1) /api/v1/namespaces/proxy-2843/services/https:proxy-service-lksd9:tlsportname2/proxy/: tls qux (200; 16.590888ms)
May  8 12:00:46.672: INFO: (1) /api/v1/namespaces/proxy-2843/services/proxy-service-lksd9:portname2/proxy/: bar (200; 16.582557ms)
May  8 12:00:46.672: INFO: (1) /api/v1/namespaces/proxy-2843/services/http:proxy-service-lksd9:portname2/proxy/: bar (200; 16.636579ms)
May  8 12:00:46.675: INFO: (2) /api/v1/namespaces/proxy-2843/pods/proxy-service-lksd9-wwp47:162/proxy/: bar (200; 3.683623ms)
May  8 12:00:46.677: INFO: (2) /api/v1/namespaces/proxy-2843/pods/proxy-service-lksd9-wwp47/proxy/: test (200; 4.960859ms)
May  8 12:00:46.677: INFO: (2) /api/v1/namespaces/proxy-2843/pods/http:proxy-service-lksd9-wwp47:160/proxy/: foo (200; 4.964696ms)
May  8 12:00:46.677: INFO: (2) /api/v1/namespaces/proxy-2843/pods/proxy-service-lksd9-wwp47:160/proxy/: foo (200; 5.267526ms)
May  8 12:00:46.677: INFO: (2) /api/v1/namespaces/proxy-2843/pods/https:proxy-service-lksd9-wwp47:460/proxy/: tls baz (200; 5.308286ms)
May  8 12:00:46.677: INFO: (2) /api/v1/namespaces/proxy-2843/pods/https:proxy-service-lksd9-wwp47:443/proxy/: test<... (200; 6.28579ms)
May  8 12:00:46.678: INFO: (2) /api/v1/namespaces/proxy-2843/services/https:proxy-service-lksd9:tlsportname1/proxy/: tls baz (200; 6.45432ms)
May  8 12:00:46.678: INFO: (2) /api/v1/namespaces/proxy-2843/services/proxy-service-lksd9:portname1/proxy/: foo (200; 6.520362ms)
May  8 12:00:46.678: INFO: (2) /api/v1/namespaces/proxy-2843/services/http:proxy-service-lksd9:portname2/proxy/: bar (200; 6.594989ms)
May  8 12:00:46.679: INFO: (2) /api/v1/namespaces/proxy-2843/pods/https:proxy-service-lksd9-wwp47:462/proxy/: tls qux (200; 6.647214ms)
May  8 12:00:46.679: INFO: (2) /api/v1/namespaces/proxy-2843/pods/http:proxy-service-lksd9-wwp47:1080/proxy/: ... (200; 6.645132ms)
May  8 12:00:46.679: INFO: (2) /api/v1/namespaces/proxy-2843/services/https:proxy-service-lksd9:tlsportname2/proxy/: tls qux (200; 6.722854ms)
May  8 12:00:46.683: INFO: (3) /api/v1/namespaces/proxy-2843/pods/proxy-service-lksd9-wwp47/proxy/: test (200; 3.931991ms)
May  8 12:00:46.683: INFO: (3) /api/v1/namespaces/proxy-2843/pods/http:proxy-service-lksd9-wwp47:160/proxy/: foo (200; 3.991721ms)
May  8 12:00:46.683: INFO: (3) /api/v1/namespaces/proxy-2843/pods/https:proxy-service-lksd9-wwp47:462/proxy/: tls qux (200; 3.945065ms)
May  8 12:00:46.683: INFO: (3) /api/v1/namespaces/proxy-2843/pods/http:proxy-service-lksd9-wwp47:162/proxy/: bar (200; 3.395703ms)
May  8 12:00:46.683: INFO: (3) /api/v1/namespaces/proxy-2843/pods/http:proxy-service-lksd9-wwp47:1080/proxy/: ... (200; 3.747653ms)
May  8 12:00:46.683: INFO: (3) /api/v1/namespaces/proxy-2843/pods/proxy-service-lksd9-wwp47:162/proxy/: bar (200; 3.634059ms)
May  8 12:00:46.683: INFO: (3) /api/v1/namespaces/proxy-2843/services/https:proxy-service-lksd9:tlsportname1/proxy/: tls baz (200; 4.707562ms)
May  8 12:00:46.683: INFO: (3) /api/v1/namespaces/proxy-2843/pods/https:proxy-service-lksd9-wwp47:443/proxy/: test<... (200; 4.082486ms)
May  8 12:00:46.684: INFO: (3) /api/v1/namespaces/proxy-2843/services/http:proxy-service-lksd9:portname2/proxy/: bar (200; 4.297ms)
May  8 12:00:46.684: INFO: (3) /api/v1/namespaces/proxy-2843/services/http:proxy-service-lksd9:portname1/proxy/: foo (200; 4.552111ms)
May  8 12:00:46.684: INFO: (3) /api/v1/namespaces/proxy-2843/services/proxy-service-lksd9:portname1/proxy/: foo (200; 4.640973ms)
May  8 12:00:46.684: INFO: (3) /api/v1/namespaces/proxy-2843/services/proxy-service-lksd9:portname2/proxy/: bar (200; 4.586985ms)
May  8 12:00:46.684: INFO: (3) /api/v1/namespaces/proxy-2843/services/https:proxy-service-lksd9:tlsportname2/proxy/: tls qux (200; 5.172897ms)
May  8 12:00:46.689: INFO: (4) /api/v1/namespaces/proxy-2843/pods/http:proxy-service-lksd9-wwp47:162/proxy/: bar (200; 4.508188ms)
May  8 12:00:46.689: INFO: (4) /api/v1/namespaces/proxy-2843/pods/https:proxy-service-lksd9-wwp47:462/proxy/: tls qux (200; 4.613197ms)
May  8 12:00:46.689: INFO: (4) /api/v1/namespaces/proxy-2843/pods/https:proxy-service-lksd9-wwp47:460/proxy/: tls baz (200; 4.595784ms)
May  8 12:00:46.689: INFO: (4) /api/v1/namespaces/proxy-2843/pods/proxy-service-lksd9-wwp47:160/proxy/: foo (200; 4.656406ms)
May  8 12:00:46.690: INFO: (4) /api/v1/namespaces/proxy-2843/pods/proxy-service-lksd9-wwp47:162/proxy/: bar (200; 5.607418ms)
May  8 12:00:46.691: INFO: (4) /api/v1/namespaces/proxy-2843/pods/proxy-service-lksd9-wwp47:1080/proxy/: test<... (200; 6.396784ms)
May  8 12:00:46.692: INFO: (4) /api/v1/namespaces/proxy-2843/services/proxy-service-lksd9:portname2/proxy/: bar (200; 7.117948ms)
May  8 12:00:46.692: INFO: (4) /api/v1/namespaces/proxy-2843/pods/proxy-service-lksd9-wwp47/proxy/: test (200; 7.168031ms)
May  8 12:00:46.692: INFO: (4) /api/v1/namespaces/proxy-2843/pods/http:proxy-service-lksd9-wwp47:160/proxy/: foo (200; 7.048827ms)
May  8 12:00:46.694: INFO: (4) /api/v1/namespaces/proxy-2843/services/https:proxy-service-lksd9:tlsportname1/proxy/: tls baz (200; 8.972666ms)
May  8 12:00:46.694: INFO: (4) /api/v1/namespaces/proxy-2843/pods/https:proxy-service-lksd9-wwp47:443/proxy/: ... (200; 10.372572ms)
May  8 12:00:46.696: INFO: (4) /api/v1/namespaces/proxy-2843/services/proxy-service-lksd9:portname1/proxy/: foo (200; 11.811722ms)
May  8 12:00:46.697: INFO: (4) /api/v1/namespaces/proxy-2843/services/http:proxy-service-lksd9:portname2/proxy/: bar (200; 12.415432ms)
May  8 12:00:46.697: INFO: (4) /api/v1/namespaces/proxy-2843/services/https:proxy-service-lksd9:tlsportname2/proxy/: tls qux (200; 12.432799ms)
May  8 12:00:46.697: INFO: (4) /api/v1/namespaces/proxy-2843/services/http:proxy-service-lksd9:portname1/proxy/: foo (200; 12.62925ms)
May  8 12:00:46.704: INFO: (5) /api/v1/namespaces/proxy-2843/pods/http:proxy-service-lksd9-wwp47:162/proxy/: bar (200; 6.751416ms)
May  8 12:00:46.704: INFO: (5) /api/v1/namespaces/proxy-2843/pods/proxy-service-lksd9-wwp47:162/proxy/: bar (200; 6.994065ms)
May  8 12:00:46.767: INFO: (5) /api/v1/namespaces/proxy-2843/services/https:proxy-service-lksd9:tlsportname1/proxy/: tls baz (200; 69.389321ms)
May  8 12:00:46.767: INFO: (5) /api/v1/namespaces/proxy-2843/services/http:proxy-service-lksd9:portname1/proxy/: foo (200; 69.345087ms)
May  8 12:00:46.767: INFO: (5) /api/v1/namespaces/proxy-2843/services/proxy-service-lksd9:portname1/proxy/: foo (200; 69.518923ms)
May  8 12:00:46.767: INFO: (5) /api/v1/namespaces/proxy-2843/services/http:proxy-service-lksd9:portname2/proxy/: bar (200; 69.54551ms)
May  8 12:00:46.767: INFO: (5) /api/v1/namespaces/proxy-2843/services/proxy-service-lksd9:portname2/proxy/: bar (200; 69.871724ms)
May  8 12:00:46.767: INFO: (5) /api/v1/namespaces/proxy-2843/pods/http:proxy-service-lksd9-wwp47:1080/proxy/: ... (200; 70.120173ms)
May  8 12:00:46.768: INFO: (5) /api/v1/namespaces/proxy-2843/pods/proxy-service-lksd9-wwp47:1080/proxy/: test<... (200; 70.216435ms)
May  8 12:00:46.768: INFO: (5) /api/v1/namespaces/proxy-2843/pods/proxy-service-lksd9-wwp47:160/proxy/: foo (200; 70.133157ms)
May  8 12:00:46.768: INFO: (5) /api/v1/namespaces/proxy-2843/pods/https:proxy-service-lksd9-wwp47:462/proxy/: tls qux (200; 70.267099ms)
May  8 12:00:46.768: INFO: (5) /api/v1/namespaces/proxy-2843/services/https:proxy-service-lksd9:tlsportname2/proxy/: tls qux (200; 70.369489ms)
May  8 12:00:46.768: INFO: (5) /api/v1/namespaces/proxy-2843/pods/https:proxy-service-lksd9-wwp47:460/proxy/: tls baz (200; 70.471589ms)
May  8 12:00:46.768: INFO: (5) /api/v1/namespaces/proxy-2843/pods/https:proxy-service-lksd9-wwp47:443/proxy/: test (200; 70.413801ms)
May  8 12:00:46.768: INFO: (5) /api/v1/namespaces/proxy-2843/pods/http:proxy-service-lksd9-wwp47:160/proxy/: foo (200; 70.57675ms)
May  8 12:00:46.772: INFO: (6) /api/v1/namespaces/proxy-2843/pods/http:proxy-service-lksd9-wwp47:1080/proxy/: ... (200; 4.22327ms)
May  8 12:00:46.772: INFO: (6) /api/v1/namespaces/proxy-2843/pods/proxy-service-lksd9-wwp47:1080/proxy/: test<... (200; 4.37253ms)
May  8 12:00:46.772: INFO: (6) /api/v1/namespaces/proxy-2843/pods/http:proxy-service-lksd9-wwp47:160/proxy/: foo (200; 4.469478ms)
May  8 12:00:46.773: INFO: (6) /api/v1/namespaces/proxy-2843/pods/https:proxy-service-lksd9-wwp47:460/proxy/: tls baz (200; 4.697914ms)
May  8 12:00:46.773: INFO: (6) /api/v1/namespaces/proxy-2843/pods/proxy-service-lksd9-wwp47/proxy/: test (200; 4.51982ms)
May  8 12:00:46.773: INFO: (6) /api/v1/namespaces/proxy-2843/pods/http:proxy-service-lksd9-wwp47:162/proxy/: bar (200; 4.77975ms)
May  8 12:00:46.773: INFO: (6) /api/v1/namespaces/proxy-2843/pods/proxy-service-lksd9-wwp47:162/proxy/: bar (200; 4.767149ms)
May  8 12:00:46.773: INFO: (6) /api/v1/namespaces/proxy-2843/pods/https:proxy-service-lksd9-wwp47:443/proxy/: test<... (200; 3.588551ms)
May  8 12:00:46.779: INFO: (7) /api/v1/namespaces/proxy-2843/pods/http:proxy-service-lksd9-wwp47:1080/proxy/: ... (200; 3.610497ms)
May  8 12:00:46.779: INFO: (7) /api/v1/namespaces/proxy-2843/pods/https:proxy-service-lksd9-wwp47:462/proxy/: tls qux (200; 3.822567ms)
May  8 12:00:46.779: INFO: (7) /api/v1/namespaces/proxy-2843/pods/http:proxy-service-lksd9-wwp47:162/proxy/: bar (200; 3.852628ms)
May  8 12:00:46.779: INFO: (7) /api/v1/namespaces/proxy-2843/pods/proxy-service-lksd9-wwp47:162/proxy/: bar (200; 3.79874ms)
May  8 12:00:46.779: INFO: (7) /api/v1/namespaces/proxy-2843/pods/proxy-service-lksd9-wwp47/proxy/: test (200; 3.908347ms)
May  8 12:00:46.779: INFO: (7) /api/v1/namespaces/proxy-2843/pods/https:proxy-service-lksd9-wwp47:460/proxy/: tls baz (200; 3.765152ms)
May  8 12:00:46.779: INFO: (7) /api/v1/namespaces/proxy-2843/pods/proxy-service-lksd9-wwp47:160/proxy/: foo (200; 3.835325ms)
May  8 12:00:46.779: INFO: (7) /api/v1/namespaces/proxy-2843/pods/https:proxy-service-lksd9-wwp47:443/proxy/: test (200; 3.765519ms)
May  8 12:00:46.789: INFO: (8) /api/v1/namespaces/proxy-2843/pods/http:proxy-service-lksd9-wwp47:1080/proxy/: ... (200; 4.196914ms)
May  8 12:00:46.789: INFO: (8) /api/v1/namespaces/proxy-2843/pods/proxy-service-lksd9-wwp47:162/proxy/: bar (200; 4.390394ms)
May  8 12:00:46.789: INFO: (8) /api/v1/namespaces/proxy-2843/services/https:proxy-service-lksd9:tlsportname2/proxy/: tls qux (200; 4.180663ms)
May  8 12:00:46.789: INFO: (8) /api/v1/namespaces/proxy-2843/pods/proxy-service-lksd9-wwp47:1080/proxy/: test<... (200; 4.280941ms)
May  8 12:00:46.789: INFO: (8) /api/v1/namespaces/proxy-2843/pods/https:proxy-service-lksd9-wwp47:443/proxy/: test<... (200; 4.3959ms)
May  8 12:00:46.796: INFO: (9) /api/v1/namespaces/proxy-2843/services/proxy-service-lksd9:portname1/proxy/: foo (200; 4.510183ms)
May  8 12:00:46.796: INFO: (9) /api/v1/namespaces/proxy-2843/pods/proxy-service-lksd9-wwp47/proxy/: test (200; 4.462555ms)
May  8 12:00:46.796: INFO: (9) /api/v1/namespaces/proxy-2843/pods/http:proxy-service-lksd9-wwp47:1080/proxy/: ... (200; 4.464997ms)
May  8 12:00:46.796: INFO: (9) /api/v1/namespaces/proxy-2843/services/https:proxy-service-lksd9:tlsportname1/proxy/: tls baz (200; 4.558038ms)
May  8 12:00:46.796: INFO: (9) /api/v1/namespaces/proxy-2843/services/http:proxy-service-lksd9:portname1/proxy/: foo (200; 4.556139ms)
May  8 12:00:46.796: INFO: (9) /api/v1/namespaces/proxy-2843/pods/https:proxy-service-lksd9-wwp47:460/proxy/: tls baz (200; 4.559038ms)
May  8 12:00:46.796: INFO: (9) /api/v1/namespaces/proxy-2843/services/https:proxy-service-lksd9:tlsportname2/proxy/: tls qux (200; 4.59729ms)
May  8 12:00:46.796: INFO: (9) /api/v1/namespaces/proxy-2843/services/http:proxy-service-lksd9:portname2/proxy/: bar (200; 4.689979ms)
May  8 12:00:46.796: INFO: (9) /api/v1/namespaces/proxy-2843/pods/https:proxy-service-lksd9-wwp47:443/proxy/: test<... (200; 2.734511ms)
May  8 12:00:46.801: INFO: (10) /api/v1/namespaces/proxy-2843/services/proxy-service-lksd9:portname2/proxy/: bar (200; 4.320654ms)
May  8 12:00:46.801: INFO: (10) /api/v1/namespaces/proxy-2843/pods/http:proxy-service-lksd9-wwp47:162/proxy/: bar (200; 4.656664ms)
May  8 12:00:46.801: INFO: (10) /api/v1/namespaces/proxy-2843/pods/https:proxy-service-lksd9-wwp47:460/proxy/: tls baz (200; 4.960836ms)
May  8 12:00:46.801: INFO: (10) /api/v1/namespaces/proxy-2843/services/https:proxy-service-lksd9:tlsportname1/proxy/: tls baz (200; 5.079369ms)
May  8 12:00:46.801: INFO: (10) /api/v1/namespaces/proxy-2843/pods/https:proxy-service-lksd9-wwp47:443/proxy/: test (200; 5.514953ms)
May  8 12:00:46.802: INFO: (10) /api/v1/namespaces/proxy-2843/pods/proxy-service-lksd9-wwp47:162/proxy/: bar (200; 5.517617ms)
May  8 12:00:46.802: INFO: (10) /api/v1/namespaces/proxy-2843/pods/https:proxy-service-lksd9-wwp47:462/proxy/: tls qux (200; 5.594335ms)
May  8 12:00:46.802: INFO: (10) /api/v1/namespaces/proxy-2843/pods/http:proxy-service-lksd9-wwp47:1080/proxy/: ... (200; 5.544519ms)
May  8 12:00:46.802: INFO: (10) /api/v1/namespaces/proxy-2843/pods/http:proxy-service-lksd9-wwp47:160/proxy/: foo (200; 5.527676ms)
May  8 12:00:46.805: INFO: (11) /api/v1/namespaces/proxy-2843/pods/proxy-service-lksd9-wwp47:160/proxy/: foo (200; 3.113648ms)
May  8 12:00:46.806: INFO: (11) /api/v1/namespaces/proxy-2843/pods/http:proxy-service-lksd9-wwp47:1080/proxy/: ... (200; 3.711132ms)
May  8 12:00:46.806: INFO: (11) /api/v1/namespaces/proxy-2843/pods/https:proxy-service-lksd9-wwp47:462/proxy/: tls qux (200; 3.821406ms)
May  8 12:00:46.806: INFO: (11) /api/v1/namespaces/proxy-2843/pods/http:proxy-service-lksd9-wwp47:160/proxy/: foo (200; 3.799068ms)
May  8 12:00:46.806: INFO: (11) /api/v1/namespaces/proxy-2843/pods/http:proxy-service-lksd9-wwp47:162/proxy/: bar (200; 4.074944ms)
May  8 12:00:46.806: INFO: (11) /api/v1/namespaces/proxy-2843/pods/proxy-service-lksd9-wwp47:1080/proxy/: test<... (200; 4.092482ms)
May  8 12:00:46.807: INFO: (11) /api/v1/namespaces/proxy-2843/services/http:proxy-service-lksd9:portname1/proxy/: foo (200; 4.717016ms)
May  8 12:00:46.807: INFO: (11) /api/v1/namespaces/proxy-2843/pods/proxy-service-lksd9-wwp47:162/proxy/: bar (200; 4.783699ms)
May  8 12:00:46.807: INFO: (11) /api/v1/namespaces/proxy-2843/pods/https:proxy-service-lksd9-wwp47:460/proxy/: tls baz (200; 4.963819ms)
May  8 12:00:46.807: INFO: (11) /api/v1/namespaces/proxy-2843/pods/proxy-service-lksd9-wwp47/proxy/: test (200; 4.946388ms)
May  8 12:00:46.807: INFO: (11) /api/v1/namespaces/proxy-2843/pods/https:proxy-service-lksd9-wwp47:443/proxy/: ... (200; 4.514541ms)
May  8 12:00:46.812: INFO: (12) /api/v1/namespaces/proxy-2843/pods/proxy-service-lksd9-wwp47:160/proxy/: foo (200; 4.481341ms)
May  8 12:00:46.812: INFO: (12) /api/v1/namespaces/proxy-2843/pods/proxy-service-lksd9-wwp47/proxy/: test (200; 4.579742ms)
May  8 12:00:46.812: INFO: (12) /api/v1/namespaces/proxy-2843/pods/proxy-service-lksd9-wwp47:1080/proxy/: test<... (200; 4.577019ms)
May  8 12:00:46.812: INFO: (12) /api/v1/namespaces/proxy-2843/pods/https:proxy-service-lksd9-wwp47:443/proxy/: test<... (200; 5.192965ms)
May  8 12:00:46.818: INFO: (13) /api/v1/namespaces/proxy-2843/pods/http:proxy-service-lksd9-wwp47:1080/proxy/: ... (200; 5.227164ms)
May  8 12:00:46.818: INFO: (13) /api/v1/namespaces/proxy-2843/pods/proxy-service-lksd9-wwp47:160/proxy/: foo (200; 5.134411ms)
May  8 12:00:46.818: INFO: (13) /api/v1/namespaces/proxy-2843/pods/http:proxy-service-lksd9-wwp47:160/proxy/: foo (200; 5.439668ms)
May  8 12:00:46.818: INFO: (13) /api/v1/namespaces/proxy-2843/pods/proxy-service-lksd9-wwp47/proxy/: test (200; 5.454704ms)
May  8 12:00:46.818: INFO: (13) /api/v1/namespaces/proxy-2843/pods/https:proxy-service-lksd9-wwp47:462/proxy/: tls qux (200; 5.449571ms)
May  8 12:00:46.822: INFO: (14) /api/v1/namespaces/proxy-2843/pods/proxy-service-lksd9-wwp47/proxy/: test (200; 3.523385ms)
May  8 12:00:46.822: INFO: (14) /api/v1/namespaces/proxy-2843/pods/https:proxy-service-lksd9-wwp47:460/proxy/: tls baz (200; 3.915053ms)
May  8 12:00:46.822: INFO: (14) /api/v1/namespaces/proxy-2843/pods/http:proxy-service-lksd9-wwp47:1080/proxy/: ... (200; 3.925679ms)
May  8 12:00:46.823: INFO: (14) /api/v1/namespaces/proxy-2843/pods/proxy-service-lksd9-wwp47:1080/proxy/: test<... (200; 4.892373ms)
May  8 12:00:46.823: INFO: (14) /api/v1/namespaces/proxy-2843/services/http:proxy-service-lksd9:portname2/proxy/: bar (200; 5.006886ms)
May  8 12:00:46.823: INFO: (14) /api/v1/namespaces/proxy-2843/pods/https:proxy-service-lksd9-wwp47:443/proxy/: ... (200; 8.695323ms)
May  8 12:00:46.834: INFO: (15) /api/v1/namespaces/proxy-2843/pods/https:proxy-service-lksd9-wwp47:462/proxy/: tls qux (200; 8.718821ms)
May  8 12:00:46.834: INFO: (15) /api/v1/namespaces/proxy-2843/pods/proxy-service-lksd9-wwp47:1080/proxy/: test<... (200; 8.990089ms)
May  8 12:00:46.834: INFO: (15) /api/v1/namespaces/proxy-2843/pods/https:proxy-service-lksd9-wwp47:443/proxy/: test (200; 9.11983ms)
May  8 12:00:46.842: INFO: (16) /api/v1/namespaces/proxy-2843/pods/proxy-service-lksd9-wwp47:160/proxy/: foo (200; 8.069974ms)
May  8 12:00:46.842: INFO: (16) /api/v1/namespaces/proxy-2843/pods/http:proxy-service-lksd9-wwp47:1080/proxy/: ... (200; 8.240671ms)
May  8 12:00:46.843: INFO: (16) /api/v1/namespaces/proxy-2843/pods/https:proxy-service-lksd9-wwp47:462/proxy/: tls qux (200; 8.508805ms)
May  8 12:00:46.843: INFO: (16) /api/v1/namespaces/proxy-2843/pods/proxy-service-lksd9-wwp47:162/proxy/: bar (200; 8.575176ms)
May  8 12:00:46.843: INFO: (16) /api/v1/namespaces/proxy-2843/pods/https:proxy-service-lksd9-wwp47:443/proxy/: test<... (200; 8.646428ms)
May  8 12:00:46.843: INFO: (16) /api/v1/namespaces/proxy-2843/pods/https:proxy-service-lksd9-wwp47:460/proxy/: tls baz (200; 8.749747ms)
May  8 12:00:46.843: INFO: (16) /api/v1/namespaces/proxy-2843/pods/http:proxy-service-lksd9-wwp47:162/proxy/: bar (200; 8.951389ms)
May  8 12:00:46.843: INFO: (16) /api/v1/namespaces/proxy-2843/pods/proxy-service-lksd9-wwp47/proxy/: test (200; 9.198267ms)
May  8 12:00:46.844: INFO: (16) /api/v1/namespaces/proxy-2843/services/https:proxy-service-lksd9:tlsportname1/proxy/: tls baz (200; 9.377816ms)
May  8 12:00:46.844: INFO: (16) /api/v1/namespaces/proxy-2843/services/http:proxy-service-lksd9:portname2/proxy/: bar (200; 9.457959ms)
May  8 12:00:46.845: INFO: (16) /api/v1/namespaces/proxy-2843/services/https:proxy-service-lksd9:tlsportname2/proxy/: tls qux (200; 10.759422ms)
May  8 12:00:46.845: INFO: (16) /api/v1/namespaces/proxy-2843/services/proxy-service-lksd9:portname1/proxy/: foo (200; 10.819903ms)
May  8 12:00:46.845: INFO: (16) /api/v1/namespaces/proxy-2843/services/proxy-service-lksd9:portname2/proxy/: bar (200; 10.867377ms)
May  8 12:00:46.846: INFO: (16) /api/v1/namespaces/proxy-2843/pods/http:proxy-service-lksd9-wwp47:160/proxy/: foo (200; 11.421845ms)
May  8 12:00:46.846: INFO: (16) /api/v1/namespaces/proxy-2843/services/http:proxy-service-lksd9:portname1/proxy/: foo (200; 11.486735ms)
May  8 12:00:46.849: INFO: (17) /api/v1/namespaces/proxy-2843/pods/proxy-service-lksd9-wwp47:160/proxy/: foo (200; 3.5529ms)
May  8 12:00:46.850: INFO: (17) /api/v1/namespaces/proxy-2843/pods/http:proxy-service-lksd9-wwp47:162/proxy/: bar (200; 3.709508ms)
May  8 12:00:46.850: INFO: (17) /api/v1/namespaces/proxy-2843/pods/https:proxy-service-lksd9-wwp47:460/proxy/: tls baz (200; 3.661293ms)
May  8 12:00:46.850: INFO: (17) /api/v1/namespaces/proxy-2843/pods/proxy-service-lksd9-wwp47:162/proxy/: bar (200; 4.185368ms)
May  8 12:00:46.850: INFO: (17) /api/v1/namespaces/proxy-2843/pods/https:proxy-service-lksd9-wwp47:443/proxy/: ... (200; 5.978818ms)
May  8 12:00:46.852: INFO: (17) /api/v1/namespaces/proxy-2843/pods/proxy-service-lksd9-wwp47:1080/proxy/: test<... (200; 6.053441ms)
May  8 12:00:46.853: INFO: (17) /api/v1/namespaces/proxy-2843/services/http:proxy-service-lksd9:portname1/proxy/: foo (200; 7.114399ms)
May  8 12:00:46.853: INFO: (17) /api/v1/namespaces/proxy-2843/services/proxy-service-lksd9:portname1/proxy/: foo (200; 7.127179ms)
May  8 12:00:46.853: INFO: (17) /api/v1/namespaces/proxy-2843/services/http:proxy-service-lksd9:portname2/proxy/: bar (200; 7.287614ms)
May  8 12:00:46.853: INFO: (17) /api/v1/namespaces/proxy-2843/pods/proxy-service-lksd9-wwp47/proxy/: test (200; 7.606253ms)
May  8 12:00:46.853: INFO: (17) /api/v1/namespaces/proxy-2843/services/https:proxy-service-lksd9:tlsportname2/proxy/: tls qux (200; 7.491489ms)
May  8 12:00:46.855: INFO: (18) /api/v1/namespaces/proxy-2843/pods/proxy-service-lksd9-wwp47/proxy/: test (200; 1.762336ms)
May  8 12:00:46.863: INFO: (18) /api/v1/namespaces/proxy-2843/services/https:proxy-service-lksd9:tlsportname1/proxy/: tls baz (200; 9.94169ms)
May  8 12:00:46.864: INFO: (18) /api/v1/namespaces/proxy-2843/pods/proxy-service-lksd9-wwp47:160/proxy/: foo (200; 10.030037ms)
May  8 12:00:46.864: INFO: (18) /api/v1/namespaces/proxy-2843/services/proxy-service-lksd9:portname1/proxy/: foo (200; 10.012454ms)
May  8 12:00:46.864: INFO: (18) /api/v1/namespaces/proxy-2843/pods/http:proxy-service-lksd9-wwp47:1080/proxy/: ... (200; 10.063026ms)
May  8 12:00:46.864: INFO: (18) /api/v1/namespaces/proxy-2843/pods/proxy-service-lksd9-wwp47:1080/proxy/: test<... (200; 10.044418ms)
May  8 12:00:46.864: INFO: (18) /api/v1/namespaces/proxy-2843/services/http:proxy-service-lksd9:portname1/proxy/: foo (200; 10.107997ms)
May  8 12:00:46.864: INFO: (18) /api/v1/namespaces/proxy-2843/services/http:proxy-service-lksd9:portname2/proxy/: bar (200; 10.15209ms)
May  8 12:00:46.864: INFO: (18) /api/v1/namespaces/proxy-2843/pods/https:proxy-service-lksd9-wwp47:443/proxy/: test<... (200; 2.882266ms)
May  8 12:00:46.867: INFO: (19) /api/v1/namespaces/proxy-2843/pods/proxy-service-lksd9-wwp47:162/proxy/: bar (200; 2.730768ms)
May  8 12:00:46.867: INFO: (19) /api/v1/namespaces/proxy-2843/pods/proxy-service-lksd9-wwp47/proxy/: test (200; 3.230058ms)
May  8 12:00:46.867: INFO: (19) /api/v1/namespaces/proxy-2843/pods/http:proxy-service-lksd9-wwp47:162/proxy/: bar (200; 3.668382ms)
May  8 12:00:46.868: INFO: (19) /api/v1/namespaces/proxy-2843/services/proxy-service-lksd9:portname2/proxy/: bar (200; 4.596648ms)
May  8 12:00:46.869: INFO: (19) /api/v1/namespaces/proxy-2843/services/http:proxy-service-lksd9:portname1/proxy/: foo (200; 3.896912ms)
May  8 12:00:46.869: INFO: (19) /api/v1/namespaces/proxy-2843/services/https:proxy-service-lksd9:tlsportname1/proxy/: tls baz (200; 5.02007ms)
May  8 12:00:46.869: INFO: (19) /api/v1/namespaces/proxy-2843/services/http:proxy-service-lksd9:portname2/proxy/: bar (200; 5.067531ms)
May  8 12:00:46.869: INFO: (19) /api/v1/namespaces/proxy-2843/pods/proxy-service-lksd9-wwp47:160/proxy/: foo (200; 4.0941ms)
May  8 12:00:46.869: INFO: (19) /api/v1/namespaces/proxy-2843/services/proxy-service-lksd9:portname1/proxy/: foo (200; 4.275369ms)
May  8 12:00:46.869: INFO: (19) /api/v1/namespaces/proxy-2843/pods/https:proxy-service-lksd9-wwp47:462/proxy/: tls qux (200; 4.651233ms)
May  8 12:00:46.869: INFO: (19) /api/v1/namespaces/proxy-2843/pods/http:proxy-service-lksd9-wwp47:1080/proxy/: ... (200; 4.900321ms)
May  8 12:00:46.869: INFO: (19) /api/v1/namespaces/proxy-2843/services/https:proxy-service-lksd9:tlsportname2/proxy/: tls qux (200; 5.0456ms)
STEP: deleting ReplicationController proxy-service-lksd9 in namespace proxy-2843, will wait for the garbage collector to delete the pods
May  8 12:00:46.928: INFO: Deleting ReplicationController proxy-service-lksd9 took: 6.905086ms
May  8 12:00:47.328: INFO: Terminating ReplicationController proxy-service-lksd9 pods took: 400.259548ms
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 12:00:53.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-2843" for this suite.

• [SLOW TEST:12.487 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:59
    should proxy through a service and a pod  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","total":275,"completed":244,"skipped":4262,"failed":0}
SSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 12:00:53.739: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Performing setup for networking test in namespace pod-network-test-8699
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May  8 12:00:53.809: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May  8 12:00:53.923: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  8 12:00:55.928: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  8 12:00:57.927: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  8 12:00:59.928: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  8 12:01:01.928: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  8 12:01:03.927: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  8 12:01:05.928: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  8 12:01:07.927: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  8 12:01:09.927: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  8 12:01:11.927: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  8 12:01:13.927: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  8 12:01:15.928: INFO: The status of Pod netserver-0 is Running (Ready = true)
May  8 12:01:15.934: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
May  8 12:01:19.955: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.90:8080/dial?request=hostname&protocol=http&host=10.244.2.37&port=8080&tries=1'] Namespace:pod-network-test-8699 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May  8 12:01:19.955: INFO: >>> kubeConfig: /root/.kube/config
I0508 12:01:19.989618       7 log.go:172] (0xc004b1a4d0) (0xc000fd9b80) Create stream
I0508 12:01:19.989649       7 log.go:172] (0xc004b1a4d0) (0xc000fd9b80) Stream added, broadcasting: 1
I0508 12:01:19.991411       7 log.go:172] (0xc004b1a4d0) Reply frame received for 1
I0508 12:01:19.991444       7 log.go:172] (0xc004b1a4d0) (0xc00133a1e0) Create stream
I0508 12:01:19.991457       7 log.go:172] (0xc004b1a4d0) (0xc00133a1e0) Stream added, broadcasting: 3
I0508 12:01:19.992501       7 log.go:172] (0xc004b1a4d0) Reply frame received for 3
I0508 12:01:19.992537       7 log.go:172] (0xc004b1a4d0) (0xc0005b0dc0) Create stream
I0508 12:01:19.992552       7 log.go:172] (0xc004b1a4d0) (0xc0005b0dc0) Stream added, broadcasting: 5
I0508 12:01:19.993752       7 log.go:172] (0xc004b1a4d0) Reply frame received for 5
I0508 12:01:20.100826       7 log.go:172] (0xc004b1a4d0) Data frame received for 3
I0508 12:01:20.100858       7 log.go:172] (0xc00133a1e0) (3) Data frame handling
I0508 12:01:20.100880       7 log.go:172] (0xc00133a1e0) (3) Data frame sent
I0508 12:01:20.101862       7 log.go:172] (0xc004b1a4d0) Data frame received for 5
I0508 12:01:20.101885       7 log.go:172] (0xc0005b0dc0) (5) Data frame handling
I0508 12:01:20.101913       7 log.go:172] (0xc004b1a4d0) Data frame received for 3
I0508 12:01:20.101945       7 log.go:172] (0xc00133a1e0) (3) Data frame handling
I0508 12:01:20.103836       7 log.go:172] (0xc004b1a4d0) Data frame received for 1
I0508 12:01:20.103861       7 log.go:172] (0xc000fd9b80) (1) Data frame handling
I0508 12:01:20.103873       7 log.go:172] (0xc000fd9b80) (1) Data frame sent
I0508 12:01:20.103887       7 log.go:172] (0xc004b1a4d0) (0xc000fd9b80) Stream removed, broadcasting: 1
I0508 12:01:20.103968       7 log.go:172] (0xc004b1a4d0) (0xc000fd9b80) Stream removed, broadcasting: 1
I0508 12:01:20.103983       7 log.go:172] (0xc004b1a4d0) (0xc00133a1e0) Stream removed, broadcasting: 3
I0508 12:01:20.104001       7 log.go:172] (0xc004b1a4d0) (0xc0005b0dc0) Stream removed, broadcasting: 5
May  8 12:01:20.104: INFO: Waiting for responses: map[]
I0508 12:01:20.104303       7 log.go:172] (0xc004b1a4d0) Go away received
May  8 12:01:20.107: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.90:8080/dial?request=hostname&protocol=http&host=10.244.1.89&port=8080&tries=1'] Namespace:pod-network-test-8699 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May  8 12:01:20.107: INFO: >>> kubeConfig: /root/.kube/config
I0508 12:01:20.134466       7 log.go:172] (0xc004b1abb0) (0xc002bb8320) Create stream
I0508 12:01:20.134496       7 log.go:172] (0xc004b1abb0) (0xc002bb8320) Stream added, broadcasting: 1
I0508 12:01:20.136401       7 log.go:172] (0xc004b1abb0) Reply frame received for 1
I0508 12:01:20.136455       7 log.go:172] (0xc004b1abb0) (0xc0009f8320) Create stream
I0508 12:01:20.136479       7 log.go:172] (0xc004b1abb0) (0xc0009f8320) Stream added, broadcasting: 3
I0508 12:01:20.137585       7 log.go:172] (0xc004b1abb0) Reply frame received for 3
I0508 12:01:20.137620       7 log.go:172] (0xc004b1abb0) (0xc002bb85a0) Create stream
I0508 12:01:20.137633       7 log.go:172] (0xc004b1abb0) (0xc002bb85a0) Stream added, broadcasting: 5
I0508 12:01:20.138665       7 log.go:172] (0xc004b1abb0) Reply frame received for 5
I0508 12:01:20.211318       7 log.go:172] (0xc004b1abb0) Data frame received for 3
I0508 12:01:20.211353       7 log.go:172] (0xc0009f8320) (3) Data frame handling
I0508 12:01:20.211378       7 log.go:172] (0xc0009f8320) (3) Data frame sent
I0508 12:01:20.211821       7 log.go:172] (0xc004b1abb0) Data frame received for 5
I0508 12:01:20.211869       7 log.go:172] (0xc002bb85a0) (5) Data frame handling
I0508 12:01:20.211911       7 log.go:172] (0xc004b1abb0) Data frame received for 3
I0508 12:01:20.211937       7 log.go:172] (0xc0009f8320) (3) Data frame handling
I0508 12:01:20.213727       7 log.go:172] (0xc004b1abb0) Data frame received for 1
I0508 12:01:20.213786       7 log.go:172] (0xc002bb8320) (1) Data frame handling
I0508 12:01:20.213829       7 log.go:172] (0xc002bb8320) (1) Data frame sent
I0508 12:01:20.213911       7 log.go:172] (0xc004b1abb0) (0xc002bb8320) Stream removed, broadcasting: 1
I0508 12:01:20.214019       7 log.go:172] (0xc004b1abb0) (0xc002bb8320) Stream removed, broadcasting: 1
I0508 12:01:20.214047       7 log.go:172] (0xc004b1abb0) (0xc0009f8320) Stream removed, broadcasting: 3
I0508 12:01:20.214066       7 log.go:172] (0xc004b1abb0) (0xc002bb85a0) Stream removed, broadcasting: 5
May  8 12:01:20.214: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
I0508 12:01:20.214169       7 log.go:172] (0xc004b1abb0) Go away received
May  8 12:01:20.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-8699" for this suite.

• [SLOW TEST:26.484 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":275,"completed":245,"skipped":4269,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 12:01:20.224: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0666 on tmpfs
May  8 12:01:20.434: INFO: Waiting up to 5m0s for pod "pod-824cc4c9-e5a9-444c-9a01-a1fae6e8da4e" in namespace "emptydir-9971" to be "Succeeded or Failed"
May  8 12:01:20.470: INFO: Pod "pod-824cc4c9-e5a9-444c-9a01-a1fae6e8da4e": Phase="Pending", Reason="", readiness=false. Elapsed: 36.674131ms
May  8 12:01:22.475: INFO: Pod "pod-824cc4c9-e5a9-444c-9a01-a1fae6e8da4e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041164513s
May  8 12:01:24.479: INFO: Pod "pod-824cc4c9-e5a9-444c-9a01-a1fae6e8da4e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.045518124s
STEP: Saw pod success
May  8 12:01:24.479: INFO: Pod "pod-824cc4c9-e5a9-444c-9a01-a1fae6e8da4e" satisfied condition "Succeeded or Failed"
May  8 12:01:24.482: INFO: Trying to get logs from node kali-worker pod pod-824cc4c9-e5a9-444c-9a01-a1fae6e8da4e container test-container: 
STEP: delete the pod
May  8 12:01:24.520: INFO: Waiting for pod pod-824cc4c9-e5a9-444c-9a01-a1fae6e8da4e to disappear
May  8 12:01:24.575: INFO: Pod pod-824cc4c9-e5a9-444c-9a01-a1fae6e8da4e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 12:01:24.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9971" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":246,"skipped":4299,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 12:01:24.586: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
May  8 12:01:24.665: INFO: Waiting up to 5m0s for pod "downward-api-c1cd3c51-dfe0-492c-a9e7-e8990cb79949" in namespace "downward-api-4938" to be "Succeeded or Failed"
May  8 12:01:24.713: INFO: Pod "downward-api-c1cd3c51-dfe0-492c-a9e7-e8990cb79949": Phase="Pending", Reason="", readiness=false. Elapsed: 47.538076ms
May  8 12:01:26.718: INFO: Pod "downward-api-c1cd3c51-dfe0-492c-a9e7-e8990cb79949": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052202933s
May  8 12:01:28.797: INFO: Pod "downward-api-c1cd3c51-dfe0-492c-a9e7-e8990cb79949": Phase="Pending", Reason="", readiness=false. Elapsed: 4.131840363s
May  8 12:01:30.802: INFO: Pod "downward-api-c1cd3c51-dfe0-492c-a9e7-e8990cb79949": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.136534129s
STEP: Saw pod success
May  8 12:01:30.802: INFO: Pod "downward-api-c1cd3c51-dfe0-492c-a9e7-e8990cb79949" satisfied condition "Succeeded or Failed"
May  8 12:01:30.805: INFO: Trying to get logs from node kali-worker2 pod downward-api-c1cd3c51-dfe0-492c-a9e7-e8990cb79949 container dapi-container: 
STEP: delete the pod
May  8 12:01:30.852: INFO: Waiting for pod downward-api-c1cd3c51-dfe0-492c-a9e7-e8990cb79949 to disappear
May  8 12:01:30.862: INFO: Pod downward-api-c1cd3c51-dfe0-492c-a9e7-e8990cb79949 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 12:01:30.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4938" for this suite.

• [SLOW TEST:6.284 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":275,"completed":247,"skipped":4325,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 12:01:30.870: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
May  8 12:01:35.525: INFO: Successfully updated pod "pod-update-activedeadlineseconds-93cbc750-9e5f-4835-bbb4-f2acb9b26cb6"
May  8 12:01:35.525: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-93cbc750-9e5f-4835-bbb4-f2acb9b26cb6" in namespace "pods-2594" to be "terminated due to deadline exceeded"
May  8 12:01:35.542: INFO: Pod "pod-update-activedeadlineseconds-93cbc750-9e5f-4835-bbb4-f2acb9b26cb6": Phase="Running", Reason="", readiness=true. Elapsed: 17.273304ms
May  8 12:01:37.547: INFO: Pod "pod-update-activedeadlineseconds-93cbc750-9e5f-4835-bbb4-f2acb9b26cb6": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.021828149s
May  8 12:01:37.547: INFO: Pod "pod-update-activedeadlineseconds-93cbc750-9e5f-4835-bbb4-f2acb9b26cb6" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 12:01:37.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2594" for this suite.

• [SLOW TEST:6.685 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":275,"completed":248,"skipped":4348,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 12:01:37.555: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May  8 12:01:37.643: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9e4a6a16-fd6f-4d3e-bc73-e4810e3fd7a6" in namespace "projected-9711" to be "Succeeded or Failed"
May  8 12:01:37.646: INFO: Pod "downwardapi-volume-9e4a6a16-fd6f-4d3e-bc73-e4810e3fd7a6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.91279ms
May  8 12:01:39.650: INFO: Pod "downwardapi-volume-9e4a6a16-fd6f-4d3e-bc73-e4810e3fd7a6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0073997s
May  8 12:01:41.654: INFO: Pod "downwardapi-volume-9e4a6a16-fd6f-4d3e-bc73-e4810e3fd7a6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011393597s
STEP: Saw pod success
May  8 12:01:41.654: INFO: Pod "downwardapi-volume-9e4a6a16-fd6f-4d3e-bc73-e4810e3fd7a6" satisfied condition "Succeeded or Failed"
May  8 12:01:41.656: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-9e4a6a16-fd6f-4d3e-bc73-e4810e3fd7a6 container client-container: 
STEP: delete the pod
May  8 12:01:41.740: INFO: Waiting for pod downwardapi-volume-9e4a6a16-fd6f-4d3e-bc73-e4810e3fd7a6 to disappear
May  8 12:01:41.748: INFO: Pod downwardapi-volume-9e4a6a16-fd6f-4d3e-bc73-e4810e3fd7a6 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 12:01:41.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9711" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":249,"skipped":4358,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 12:01:41.780: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 12:01:48.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-4145" for this suite.
STEP: Destroying namespace "nsdeletetest-86" for this suite.
May  8 12:01:48.228: INFO: Namespace nsdeletetest-86 was already deleted
STEP: Destroying namespace "nsdeletetest-4118" for this suite.

• [SLOW TEST:6.451 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":275,"completed":250,"skipped":4366,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 12:01:48.232: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May  8 12:01:48.663: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May  8 12:01:50.674: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724536108, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724536108, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724536108, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724536108, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May  8 12:01:53.803: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May  8 12:01:53.834: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the custom resource webhook via the AdmissionRegistration API
STEP: Creating a custom resource that should be denied by the webhook
STEP: Creating a custom resource whose deletion would be denied by the webhook
STEP: Updating the custom resource with disallowed data should be denied
STEP: Deleting the custom resource should be denied
STEP: Remove the offending key and value from the custom resource data
STEP: Deleting the updated custom resource should be successful
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 12:01:55.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2681" for this suite.
STEP: Destroying namespace "webhook-2681-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:7.196 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":275,"completed":251,"skipped":4376,"failed":0}
S
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 12:01:55.428: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name s-test-opt-del-7d2e9890-981d-4a05-96b7-a9a26a8d99dd
STEP: Creating secret with name s-test-opt-upd-d16f8d5b-6550-439e-9dab-8a3a09661467
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-7d2e9890-981d-4a05-96b7-a9a26a8d99dd
STEP: Updating secret s-test-opt-upd-d16f8d5b-6550-439e-9dab-8a3a09661467
STEP: Creating secret with name s-test-opt-create-4b70bd74-2ac4-48a4-8ca7-39e62486c3a5
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 12:03:16.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2219" for this suite.

• [SLOW TEST:80.775 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":252,"skipped":4377,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 12:03:16.204: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May  8 12:03:16.278: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
May  8 12:03:19.252: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1264 create -f -'
May  8 12:03:23.295: INFO: stderr: ""
May  8 12:03:23.295: INFO: stdout: "e2e-test-crd-publish-openapi-3172-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
May  8 12:03:23.295: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1264 delete e2e-test-crd-publish-openapi-3172-crds test-cr'
May  8 12:03:23.420: INFO: stderr: ""
May  8 12:03:23.420: INFO: stdout: "e2e-test-crd-publish-openapi-3172-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
May  8 12:03:23.420: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1264 apply -f -'
May  8 12:03:23.785: INFO: stderr: ""
May  8 12:03:23.785: INFO: stdout: "e2e-test-crd-publish-openapi-3172-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
May  8 12:03:23.785: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1264 delete e2e-test-crd-publish-openapi-3172-crds test-cr'
May  8 12:03:23.950: INFO: stderr: ""
May  8 12:03:23.950: INFO: stdout: "e2e-test-crd-publish-openapi-3172-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR without validation schema
May  8 12:03:23.950: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3172-crds'
May  8 12:03:24.374: INFO: stderr: ""
May  8 12:03:24.374: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-3172-crd\nVERSION:  crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n     \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 12:03:26.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-1264" for this suite.

• [SLOW TEST:10.108 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":275,"completed":253,"skipped":4396,"failed":0}
SS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 12:03:26.312: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May  8 12:03:26.573: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
May  8 12:03:26.583: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  8 12:03:26.588: INFO: Number of nodes with available pods: 0
May  8 12:03:26.588: INFO: Node kali-worker is running more than one daemon pod
May  8 12:03:27.593: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  8 12:03:27.597: INFO: Number of nodes with available pods: 0
May  8 12:03:27.597: INFO: Node kali-worker is running more than one daemon pod
May  8 12:03:28.594: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  8 12:03:28.598: INFO: Number of nodes with available pods: 0
May  8 12:03:28.598: INFO: Node kali-worker is running more than one daemon pod
May  8 12:03:29.614: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  8 12:03:29.618: INFO: Number of nodes with available pods: 0
May  8 12:03:29.618: INFO: Node kali-worker is running more than one daemon pod
May  8 12:03:30.614: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  8 12:03:30.618: INFO: Number of nodes with available pods: 2
May  8 12:03:30.618: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
May  8 12:03:30.663: INFO: Wrong image for pod: daemon-set-4gwrm. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May  8 12:03:30.663: INFO: Wrong image for pod: daemon-set-shblq. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May  8 12:03:30.707: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  8 12:03:31.711: INFO: Wrong image for pod: daemon-set-4gwrm. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May  8 12:03:31.711: INFO: Wrong image for pod: daemon-set-shblq. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May  8 12:03:31.714: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  8 12:03:32.712: INFO: Wrong image for pod: daemon-set-4gwrm. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May  8 12:03:32.713: INFO: Wrong image for pod: daemon-set-shblq. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May  8 12:03:32.716: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  8 12:03:33.713: INFO: Wrong image for pod: daemon-set-4gwrm. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May  8 12:03:33.713: INFO: Wrong image for pod: daemon-set-shblq. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May  8 12:03:33.717: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  8 12:03:34.711: INFO: Wrong image for pod: daemon-set-4gwrm. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May  8 12:03:34.711: INFO: Wrong image for pod: daemon-set-shblq. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May  8 12:03:34.711: INFO: Pod daemon-set-shblq is not available
May  8 12:03:34.715: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  8 12:03:35.712: INFO: Wrong image for pod: daemon-set-4gwrm. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May  8 12:03:35.712: INFO: Wrong image for pod: daemon-set-shblq. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May  8 12:03:35.712: INFO: Pod daemon-set-shblq is not available
May  8 12:03:35.716: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  8 12:03:36.712: INFO: Wrong image for pod: daemon-set-4gwrm. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May  8 12:03:36.712: INFO: Wrong image for pod: daemon-set-shblq. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May  8 12:03:36.712: INFO: Pod daemon-set-shblq is not available
May  8 12:03:36.717: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  8 12:03:37.712: INFO: Wrong image for pod: daemon-set-4gwrm. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May  8 12:03:37.712: INFO: Wrong image for pod: daemon-set-shblq. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May  8 12:03:37.712: INFO: Pod daemon-set-shblq is not available
May  8 12:03:37.716: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  8 12:03:38.712: INFO: Wrong image for pod: daemon-set-4gwrm. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May  8 12:03:38.712: INFO: Wrong image for pod: daemon-set-shblq. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May  8 12:03:38.712: INFO: Pod daemon-set-shblq is not available
May  8 12:03:38.717: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  8 12:03:39.712: INFO: Wrong image for pod: daemon-set-4gwrm. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May  8 12:03:39.712: INFO: Wrong image for pod: daemon-set-shblq. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May  8 12:03:39.712: INFO: Pod daemon-set-shblq is not available
May  8 12:03:39.717: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  8 12:03:40.712: INFO: Wrong image for pod: daemon-set-4gwrm. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May  8 12:03:40.713: INFO: Wrong image for pod: daemon-set-shblq. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May  8 12:03:40.713: INFO: Pod daemon-set-shblq is not available
May  8 12:03:40.717: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  8 12:03:41.712: INFO: Wrong image for pod: daemon-set-4gwrm. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May  8 12:03:41.712: INFO: Wrong image for pod: daemon-set-shblq. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May  8 12:03:41.712: INFO: Pod daemon-set-shblq is not available
May  8 12:03:41.717: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  8 12:03:42.712: INFO: Wrong image for pod: daemon-set-4gwrm. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May  8 12:03:42.712: INFO: Wrong image for pod: daemon-set-shblq. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May  8 12:03:42.712: INFO: Pod daemon-set-shblq is not available
May  8 12:03:42.718: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  8 12:03:43.726: INFO: Wrong image for pod: daemon-set-4gwrm. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May  8 12:03:43.726: INFO: Pod daemon-set-jvfnb is not available
May  8 12:03:43.731: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  8 12:03:44.712: INFO: Wrong image for pod: daemon-set-4gwrm. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May  8 12:03:44.712: INFO: Pod daemon-set-jvfnb is not available
May  8 12:03:44.720: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  8 12:03:45.711: INFO: Wrong image for pod: daemon-set-4gwrm. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May  8 12:03:45.711: INFO: Pod daemon-set-jvfnb is not available
May  8 12:03:45.715: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  8 12:03:46.712: INFO: Wrong image for pod: daemon-set-4gwrm. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May  8 12:03:46.716: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  8 12:03:47.711: INFO: Wrong image for pod: daemon-set-4gwrm. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May  8 12:03:47.715: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  8 12:03:48.712: INFO: Wrong image for pod: daemon-set-4gwrm. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May  8 12:03:48.712: INFO: Pod daemon-set-4gwrm is not available
May  8 12:03:48.716: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  8 12:03:49.715: INFO: Wrong image for pod: daemon-set-4gwrm. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May  8 12:03:49.715: INFO: Pod daemon-set-4gwrm is not available
May  8 12:03:49.719: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  8 12:03:50.712: INFO: Wrong image for pod: daemon-set-4gwrm. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May  8 12:03:50.712: INFO: Pod daemon-set-4gwrm is not available
May  8 12:03:50.716: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  8 12:03:51.711: INFO: Wrong image for pod: daemon-set-4gwrm. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May  8 12:03:51.711: INFO: Pod daemon-set-4gwrm is not available
May  8 12:03:51.715: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  8 12:03:52.712: INFO: Wrong image for pod: daemon-set-4gwrm. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May  8 12:03:52.712: INFO: Pod daemon-set-4gwrm is not available
May  8 12:03:52.717: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  8 12:03:53.751: INFO: Wrong image for pod: daemon-set-4gwrm. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May  8 12:03:53.751: INFO: Pod daemon-set-4gwrm is not available
May  8 12:03:53.782: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  8 12:03:54.712: INFO: Pod daemon-set-85rbf is not available
May  8 12:03:54.716: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
STEP: Check that daemon pods are still running on every node of the cluster.
May  8 12:03:54.720: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  8 12:03:54.722: INFO: Number of nodes with available pods: 1
May  8 12:03:54.722: INFO: Node kali-worker is running more than one daemon pod
May  8 12:03:55.729: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  8 12:03:55.733: INFO: Number of nodes with available pods: 1
May  8 12:03:55.733: INFO: Node kali-worker is running more than one daemon pod
May  8 12:03:56.728: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  8 12:03:56.732: INFO: Number of nodes with available pods: 2
May  8 12:03:56.732: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6047, will wait for the garbage collector to delete the pods
May  8 12:03:56.807: INFO: Deleting DaemonSet.extensions daemon-set took: 6.353798ms
May  8 12:03:57.108: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.343008ms
May  8 12:04:03.811: INFO: Number of nodes with available pods: 0
May  8 12:04:03.811: INFO: Number of running nodes: 0, number of available pods: 0
May  8 12:04:03.814: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6047/daemonsets","resourceVersion":"2586614"},"items":null}

May  8 12:04:03.816: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6047/pods","resourceVersion":"2586614"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 12:04:03.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-6047" for this suite.

• [SLOW TEST:37.519 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":275,"completed":254,"skipped":4398,"failed":0}
SSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 12:04:03.831: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
May  8 12:04:03.890: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
May  8 12:04:03.939: INFO: Waiting for terminating namespaces to be deleted...
May  8 12:04:03.941: INFO: 
Logging pods the kubelet thinks are on node kali-worker before test
May  8 12:04:03.956: INFO: kindnet-f8plf from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
May  8 12:04:03.956: INFO: 	Container kindnet-cni ready: true, restart count 1
May  8 12:04:03.956: INFO: kube-proxy-vrswj from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
May  8 12:04:03.956: INFO: 	Container kube-proxy ready: true, restart count 0
May  8 12:04:03.956: INFO: 
Logging pods the kubelet thinks are on node kali-worker2 before test
May  8 12:04:03.961: INFO: kindnet-mcdh2 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
May  8 12:04:03.961: INFO: 	Container kindnet-cni ready: true, restart count 0
May  8 12:04:03.961: INFO: kube-proxy-mmnb6 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
May  8 12:04:03.961: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-a087f8d5-e35b-4f56-b398-25e831ec48a3 95
STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled
STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled
STEP: removing the label kubernetes.io/e2e-a087f8d5-e35b-4f56-b398-25e831ec48a3 off the node kali-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-a087f8d5-e35b-4f56-b398-25e831ec48a3
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 12:09:14.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-1696" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82

• [SLOW TEST:310.375 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":275,"completed":255,"skipped":4408,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 12:09:14.206: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation
May  8 12:09:14.280: INFO: >>> kubeConfig: /root/.kube/config
May  8 12:09:16.234: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 12:09:27.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-5784" for this suite.

• [SLOW TEST:13.763 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":275,"completed":256,"skipped":4413,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 12:09:27.969: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod pod-subpath-test-configmap-bhfp
STEP: Creating a pod to test atomic-volume-subpath
May  8 12:09:28.064: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-bhfp" in namespace "subpath-9513" to be "Succeeded or Failed"
May  8 12:09:28.098: INFO: Pod "pod-subpath-test-configmap-bhfp": Phase="Pending", Reason="", readiness=false. Elapsed: 33.720608ms
May  8 12:09:30.101: INFO: Pod "pod-subpath-test-configmap-bhfp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037323859s
May  8 12:09:32.116: INFO: Pod "pod-subpath-test-configmap-bhfp": Phase="Running", Reason="", readiness=true. Elapsed: 4.051601589s
May  8 12:09:34.119: INFO: Pod "pod-subpath-test-configmap-bhfp": Phase="Running", Reason="", readiness=true. Elapsed: 6.054778876s
May  8 12:09:36.122: INFO: Pod "pod-subpath-test-configmap-bhfp": Phase="Running", Reason="", readiness=true. Elapsed: 8.058384014s
May  8 12:09:38.131: INFO: Pod "pod-subpath-test-configmap-bhfp": Phase="Running", Reason="", readiness=true. Elapsed: 10.066597723s
May  8 12:09:40.134: INFO: Pod "pod-subpath-test-configmap-bhfp": Phase="Running", Reason="", readiness=true. Elapsed: 12.069862464s
May  8 12:09:42.166: INFO: Pod "pod-subpath-test-configmap-bhfp": Phase="Running", Reason="", readiness=true. Elapsed: 14.102199657s
May  8 12:09:44.192: INFO: Pod "pod-subpath-test-configmap-bhfp": Phase="Running", Reason="", readiness=true. Elapsed: 16.127852685s
May  8 12:09:46.195: INFO: Pod "pod-subpath-test-configmap-bhfp": Phase="Running", Reason="", readiness=true. Elapsed: 18.131448208s
May  8 12:09:48.200: INFO: Pod "pod-subpath-test-configmap-bhfp": Phase="Running", Reason="", readiness=true. Elapsed: 20.135767727s
May  8 12:09:50.204: INFO: Pod "pod-subpath-test-configmap-bhfp": Phase="Running", Reason="", readiness=true. Elapsed: 22.139863001s
May  8 12:09:52.208: INFO: Pod "pod-subpath-test-configmap-bhfp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.144023508s
STEP: Saw pod success
May  8 12:09:52.208: INFO: Pod "pod-subpath-test-configmap-bhfp" satisfied condition "Succeeded or Failed"
May  8 12:09:52.211: INFO: Trying to get logs from node kali-worker2 pod pod-subpath-test-configmap-bhfp container test-container-subpath-configmap-bhfp: 
STEP: delete the pod
May  8 12:09:52.242: INFO: Waiting for pod pod-subpath-test-configmap-bhfp to disappear
May  8 12:09:52.246: INFO: Pod pod-subpath-test-configmap-bhfp no longer exists
STEP: Deleting pod pod-subpath-test-configmap-bhfp
May  8 12:09:52.246: INFO: Deleting pod "pod-subpath-test-configmap-bhfp" in namespace "subpath-9513"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 12:09:52.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-9513" for this suite.

• [SLOW TEST:24.314 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":275,"completed":257,"skipped":4423,"failed":0}
SS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 12:09:52.284: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod pod-subpath-test-downwardapi-vksw
STEP: Creating a pod to test atomic-volume-subpath
May  8 12:09:52.372: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-vksw" in namespace "subpath-9052" to be "Succeeded or Failed"
May  8 12:09:52.393: INFO: Pod "pod-subpath-test-downwardapi-vksw": Phase="Pending", Reason="", readiness=false. Elapsed: 21.164664ms
May  8 12:09:54.412: INFO: Pod "pod-subpath-test-downwardapi-vksw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039575019s
May  8 12:09:56.418: INFO: Pod "pod-subpath-test-downwardapi-vksw": Phase="Running", Reason="", readiness=true. Elapsed: 4.0464708s
May  8 12:09:58.422: INFO: Pod "pod-subpath-test-downwardapi-vksw": Phase="Running", Reason="", readiness=true. Elapsed: 6.050472215s
May  8 12:10:00.426: INFO: Pod "pod-subpath-test-downwardapi-vksw": Phase="Running", Reason="", readiness=true. Elapsed: 8.053855432s
May  8 12:10:02.434: INFO: Pod "pod-subpath-test-downwardapi-vksw": Phase="Running", Reason="", readiness=true. Elapsed: 10.062190741s
May  8 12:10:04.446: INFO: Pod "pod-subpath-test-downwardapi-vksw": Phase="Running", Reason="", readiness=true. Elapsed: 12.07392937s
May  8 12:10:06.450: INFO: Pod "pod-subpath-test-downwardapi-vksw": Phase="Running", Reason="", readiness=true. Elapsed: 14.078072762s
May  8 12:10:08.455: INFO: Pod "pod-subpath-test-downwardapi-vksw": Phase="Running", Reason="", readiness=true. Elapsed: 16.082581382s
May  8 12:10:10.459: INFO: Pod "pod-subpath-test-downwardapi-vksw": Phase="Running", Reason="", readiness=true. Elapsed: 18.086816219s
May  8 12:10:12.463: INFO: Pod "pod-subpath-test-downwardapi-vksw": Phase="Running", Reason="", readiness=true. Elapsed: 20.091438877s
May  8 12:10:14.469: INFO: Pod "pod-subpath-test-downwardapi-vksw": Phase="Running", Reason="", readiness=true. Elapsed: 22.096871935s
May  8 12:10:16.473: INFO: Pod "pod-subpath-test-downwardapi-vksw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.100623128s
STEP: Saw pod success
May  8 12:10:16.473: INFO: Pod "pod-subpath-test-downwardapi-vksw" satisfied condition "Succeeded or Failed"
May  8 12:10:16.475: INFO: Trying to get logs from node kali-worker pod pod-subpath-test-downwardapi-vksw container test-container-subpath-downwardapi-vksw: 
STEP: delete the pod
May  8 12:10:16.544: INFO: Waiting for pod pod-subpath-test-downwardapi-vksw to disappear
May  8 12:10:16.646: INFO: Pod pod-subpath-test-downwardapi-vksw no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-vksw
May  8 12:10:16.646: INFO: Deleting pod "pod-subpath-test-downwardapi-vksw" in namespace "subpath-9052"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 12:10:16.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-9052" for this suite.

• [SLOW TEST:24.382 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":275,"completed":258,"skipped":4425,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 12:10:16.666: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: getting the auto-created API token
May  8 12:10:17.242: INFO: created pod pod-service-account-defaultsa
May  8 12:10:17.242: INFO: pod pod-service-account-defaultsa service account token volume mount: true
May  8 12:10:17.248: INFO: created pod pod-service-account-mountsa
May  8 12:10:17.248: INFO: pod pod-service-account-mountsa service account token volume mount: true
May  8 12:10:17.278: INFO: created pod pod-service-account-nomountsa
May  8 12:10:17.278: INFO: pod pod-service-account-nomountsa service account token volume mount: false
May  8 12:10:17.346: INFO: created pod pod-service-account-defaultsa-mountspec
May  8 12:10:17.346: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
May  8 12:10:17.374: INFO: created pod pod-service-account-mountsa-mountspec
May  8 12:10:17.374: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
May  8 12:10:17.434: INFO: created pod pod-service-account-nomountsa-mountspec
May  8 12:10:17.434: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
May  8 12:10:17.513: INFO: created pod pod-service-account-defaultsa-nomountspec
May  8 12:10:17.513: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
May  8 12:10:17.554: INFO: created pod pod-service-account-mountsa-nomountspec
May  8 12:10:17.554: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
May  8 12:10:17.652: INFO: created pod pod-service-account-nomountsa-nomountspec
May  8 12:10:17.652: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 12:10:17.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-9074" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","total":275,"completed":259,"skipped":4443,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
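The ServiceAccounts test above creates nine pods covering every combination of the service account's `automountServiceAccountToken` field (unset/true/false) and the pod spec's field of the same name, and the `token volume mount: true/false` lines record the outcome. The documented precedence — the pod-level field wins if set, otherwise the service-account field, otherwise mount by default — reproduces every line of the log. A minimal sketch of that decision rule (function name is illustrative, not from the Kubernetes source):

```python
from typing import Optional

def should_automount(pod_setting: Optional[bool],
                     sa_setting: Optional[bool]) -> bool:
    """Decide whether a service-account token volume is mounted.

    Precedence (documented Kubernetes behaviour):
      1. pod.spec.automountServiceAccountToken, if set
      2. serviceAccount.automountServiceAccountToken, if set
      3. default: mount the token
    """
    if pod_setting is not None:
        return pod_setting
    if sa_setting is not None:
        return sa_setting
    return True

# Cross-checking against the log above:
# pod-service-account-nomountsa            (SA=False, pod unset) -> False
# pod-service-account-nomountsa-mountspec  (SA=False, pod=True)  -> True
# pod-service-account-mountsa-nomountspec  (SA=True,  pod=False) -> False
```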
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 12:10:17.707: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-d184fb9a-4c2f-4d87-a3b4-ca586dae06e9
STEP: Creating a pod to test consume configMaps
May  8 12:10:18.010: INFO: Waiting up to 5m0s for pod "pod-configmaps-10476b74-1b83-42af-b266-cf8467cc3f61" in namespace "configmap-4264" to be "Succeeded or Failed"
May  8 12:10:18.013: INFO: Pod "pod-configmaps-10476b74-1b83-42af-b266-cf8467cc3f61": Phase="Pending", Reason="", readiness=false. Elapsed: 2.695356ms
May  8 12:10:20.097: INFO: Pod "pod-configmaps-10476b74-1b83-42af-b266-cf8467cc3f61": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087183775s
May  8 12:10:22.340: INFO: Pod "pod-configmaps-10476b74-1b83-42af-b266-cf8467cc3f61": Phase="Pending", Reason="", readiness=false. Elapsed: 4.330013905s
May  8 12:10:24.843: INFO: Pod "pod-configmaps-10476b74-1b83-42af-b266-cf8467cc3f61": Phase="Pending", Reason="", readiness=false. Elapsed: 6.83296186s
May  8 12:10:26.963: INFO: Pod "pod-configmaps-10476b74-1b83-42af-b266-cf8467cc3f61": Phase="Pending", Reason="", readiness=false. Elapsed: 8.953199336s
May  8 12:10:29.046: INFO: Pod "pod-configmaps-10476b74-1b83-42af-b266-cf8467cc3f61": Phase="Pending", Reason="", readiness=false. Elapsed: 11.036279399s
May  8 12:10:31.057: INFO: Pod "pod-configmaps-10476b74-1b83-42af-b266-cf8467cc3f61": Phase="Running", Reason="", readiness=true. Elapsed: 13.046850115s
May  8 12:10:33.061: INFO: Pod "pod-configmaps-10476b74-1b83-42af-b266-cf8467cc3f61": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.051253014s
STEP: Saw pod success
May  8 12:10:33.061: INFO: Pod "pod-configmaps-10476b74-1b83-42af-b266-cf8467cc3f61" satisfied condition "Succeeded or Failed"
May  8 12:10:33.064: INFO: Trying to get logs from node kali-worker pod pod-configmaps-10476b74-1b83-42af-b266-cf8467cc3f61 container configmap-volume-test: 
STEP: delete the pod
May  8 12:10:33.097: INFO: Waiting for pod pod-configmaps-10476b74-1b83-42af-b266-cf8467cc3f61 to disappear
May  8 12:10:33.110: INFO: Pod pod-configmaps-10476b74-1b83-42af-b266-cf8467cc3f61 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 12:10:33.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4264" for this suite.

• [SLOW TEST:15.411 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":260,"skipped":4480,"failed":0}
SSSSSSSSSSSSSSSSSSSS
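The ConfigMap test above mounts the volume with a `defaultMode` and has the test container print the resulting file permissions, which the framework compares against an `ls -l`-style string. The log does not show the mode value this run used, so the `0o400` below is an assumption for illustration; the sketch only shows how an octal mode maps to the string form the test asserts on:

```python
import stat

def mode_string(mode: int) -> str:
    """Render permission bits the way `ls -l` does for a regular file,
    e.g. 0o400 -> '-r--------'. This mirrors the string the e2e test
    container prints for the mounted ConfigMap file."""
    return stat.filemode(stat.S_IFREG | mode)

# Assumed example value; the actual defaultMode is not printed in this log.
print(mode_string(0o400))
```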
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 12:10:33.118: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
May  8 12:10:33.226: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
May  8 12:10:33.250: INFO: Waiting for terminating namespaces to be deleted...
May  8 12:10:33.253: INFO: 
Logging pods the kubelet thinks is on node kali-worker before test
May  8 12:10:33.257: INFO: kindnet-f8plf from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
May  8 12:10:33.257: INFO: 	Container kindnet-cni ready: true, restart count 1
May  8 12:10:33.257: INFO: kube-proxy-vrswj from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
May  8 12:10:33.258: INFO: 	Container kube-proxy ready: true, restart count 0
May  8 12:10:33.258: INFO: 
Logging pods the kubelet thinks is on node kali-worker2 before test
May  8 12:10:33.262: INFO: kindnet-mcdh2 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
May  8 12:10:33.262: INFO: 	Container kindnet-cni ready: true, restart count 0
May  8 12:10:33.262: INFO: kube-proxy-mmnb6 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
May  8 12:10:33.262: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: verifying the node has the label node kali-worker
STEP: verifying the node has the label node kali-worker2
May  8 12:10:33.356: INFO: Pod kindnet-f8plf requesting resource cpu=100m on Node kali-worker
May  8 12:10:33.356: INFO: Pod kindnet-mcdh2 requesting resource cpu=100m on Node kali-worker2
May  8 12:10:33.356: INFO: Pod kube-proxy-mmnb6 requesting resource cpu=0m on Node kali-worker2
May  8 12:10:33.356: INFO: Pod kube-proxy-vrswj requesting resource cpu=0m on Node kali-worker
STEP: Starting Pods to consume most of the cluster CPU.
May  8 12:10:33.356: INFO: Creating a pod which consumes cpu=11130m on Node kali-worker
May  8 12:10:33.363: INFO: Creating a pod which consumes cpu=11130m on Node kali-worker2
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-36d2f6fd-5895-4163-8129-8b0b7a1f0379.160d0c5c232bdb51], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5893/filler-pod-36d2f6fd-5895-4163-8129-8b0b7a1f0379 to kali-worker]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-36d2f6fd-5895-4163-8129-8b0b7a1f0379.160d0c5c71d4f509], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-36d2f6fd-5895-4163-8129-8b0b7a1f0379.160d0c5ce4613973], Reason = [Created], Message = [Created container filler-pod-36d2f6fd-5895-4163-8129-8b0b7a1f0379]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-36d2f6fd-5895-4163-8129-8b0b7a1f0379.160d0c5d14b88171], Reason = [Started], Message = [Started container filler-pod-36d2f6fd-5895-4163-8129-8b0b7a1f0379]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-7277d3fa-168d-47a0-a8bf-7e023f7cc44a.160d0c5c25488491], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5893/filler-pod-7277d3fa-168d-47a0-a8bf-7e023f7cc44a to kali-worker2]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-7277d3fa-168d-47a0-a8bf-7e023f7cc44a.160d0c5cb0d3f9be], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-7277d3fa-168d-47a0-a8bf-7e023f7cc44a.160d0c5d26ec89d5], Reason = [Created], Message = [Created container filler-pod-7277d3fa-168d-47a0-a8bf-7e023f7cc44a]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-7277d3fa-168d-47a0-a8bf-7e023f7cc44a.160d0c5d461e6f8b], Reason = [Started], Message = [Started container filler-pod-7277d3fa-168d-47a0-a8bf-7e023f7cc44a]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.160d0c5d8dd8492e], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: removing the label node off the node kali-worker
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node kali-worker2
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 12:10:40.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-5893" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82

• [SLOW TEST:7.472 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","total":275,"completed":261,"skipped":4500,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
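The scheduler-predicates test above sizes one "filler" pod per node to consume the CPU the node still has free (the log shows 100m already requested by kindnet on each worker, and filler pods of 11130m), then confirms that one more pod with a nonzero CPU request fails with `Insufficient cpu`. A sketch of that arithmetic — note the 11230m allocatable figure is inferred from the log (11130m filler + 100m requested), not printed in it:

```python
def filler_cpu_millis(allocatable_m: int, requested_m: int) -> int:
    """CPU request (in millicores) for the filler pod: everything the node
    still has free, so any further pod requesting CPU cannot schedule."""
    return allocatable_m - requested_m

# From the log: kindnet requests 100m, kube-proxy 0m on each worker.
# Allocatable of 11230m per node is an inference, not a logged value.
print(filler_cpu_millis(11230, 100 + 0))
```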
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 12:10:40.591: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-map-a5784ec6-5e3b-4f2d-baac-ac74e06e85b9
STEP: Creating a pod to test consume configMaps
May  8 12:10:40.688: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-aed4320d-cf71-40a1-a383-7a63e2485fdf" in namespace "projected-8349" to be "Succeeded or Failed"
May  8 12:10:40.733: INFO: Pod "pod-projected-configmaps-aed4320d-cf71-40a1-a383-7a63e2485fdf": Phase="Pending", Reason="", readiness=false. Elapsed: 45.05823ms
May  8 12:10:42.760: INFO: Pod "pod-projected-configmaps-aed4320d-cf71-40a1-a383-7a63e2485fdf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071514923s
May  8 12:10:44.819: INFO: Pod "pod-projected-configmaps-aed4320d-cf71-40a1-a383-7a63e2485fdf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.131203454s
STEP: Saw pod success
May  8 12:10:44.819: INFO: Pod "pod-projected-configmaps-aed4320d-cf71-40a1-a383-7a63e2485fdf" satisfied condition "Succeeded or Failed"
May  8 12:10:44.822: INFO: Trying to get logs from node kali-worker pod pod-projected-configmaps-aed4320d-cf71-40a1-a383-7a63e2485fdf container projected-configmap-volume-test: 
STEP: delete the pod
May  8 12:10:44.873: INFO: Waiting for pod pod-projected-configmaps-aed4320d-cf71-40a1-a383-7a63e2485fdf to disappear
May  8 12:10:44.967: INFO: Pod pod-projected-configmaps-aed4320d-cf71-40a1-a383-7a63e2485fdf no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 12:10:44.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8349" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":262,"skipped":4525,"failed":0}

------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 12:10:44.983: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir volume type on node default medium
May  8 12:10:45.155: INFO: Waiting up to 5m0s for pod "pod-bc7e22f0-b95b-4dab-96d0-530b0e9bd0e2" in namespace "emptydir-5711" to be "Succeeded or Failed"
May  8 12:10:45.171: INFO: Pod "pod-bc7e22f0-b95b-4dab-96d0-530b0e9bd0e2": Phase="Pending", Reason="", readiness=false. Elapsed: 15.059446ms
May  8 12:10:47.263: INFO: Pod "pod-bc7e22f0-b95b-4dab-96d0-530b0e9bd0e2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.107393593s
May  8 12:10:49.267: INFO: Pod "pod-bc7e22f0-b95b-4dab-96d0-530b0e9bd0e2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.111667121s
STEP: Saw pod success
May  8 12:10:49.267: INFO: Pod "pod-bc7e22f0-b95b-4dab-96d0-530b0e9bd0e2" satisfied condition "Succeeded or Failed"
May  8 12:10:49.270: INFO: Trying to get logs from node kali-worker pod pod-bc7e22f0-b95b-4dab-96d0-530b0e9bd0e2 container test-container: 
STEP: delete the pod
May  8 12:10:49.339: INFO: Waiting for pod pod-bc7e22f0-b95b-4dab-96d0-530b0e9bd0e2 to disappear
May  8 12:10:49.364: INFO: Pod pod-bc7e22f0-b95b-4dab-96d0-530b0e9bd0e2 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 12:10:49.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5711" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":263,"skipped":4525,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 12:10:49.374: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
May  8 12:10:57.576: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
May  8 12:10:57.590: INFO: Pod pod-with-poststart-http-hook still exists
May  8 12:10:59.591: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
May  8 12:10:59.596: INFO: Pod pod-with-poststart-http-hook still exists
May  8 12:11:01.591: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
May  8 12:11:01.596: INFO: Pod pod-with-poststart-http-hook still exists
May  8 12:11:03.591: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
May  8 12:11:03.595: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 12:11:03.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-8821" for this suite.

• [SLOW TEST:14.229 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":275,"completed":264,"skipped":4554,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 12:11:03.605: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test substitution in container's args
May  8 12:11:04.276: INFO: Waiting up to 5m0s for pod "var-expansion-028f5ad3-3652-4c65-a217-0e26762d124b" in namespace "var-expansion-5600" to be "Succeeded or Failed"
May  8 12:11:04.328: INFO: Pod "var-expansion-028f5ad3-3652-4c65-a217-0e26762d124b": Phase="Pending", Reason="", readiness=false. Elapsed: 51.338593ms
May  8 12:11:06.394: INFO: Pod "var-expansion-028f5ad3-3652-4c65-a217-0e26762d124b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.11805655s
May  8 12:11:08.496: INFO: Pod "var-expansion-028f5ad3-3652-4c65-a217-0e26762d124b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.219962265s
STEP: Saw pod success
May  8 12:11:08.496: INFO: Pod "var-expansion-028f5ad3-3652-4c65-a217-0e26762d124b" satisfied condition "Succeeded or Failed"
May  8 12:11:08.500: INFO: Trying to get logs from node kali-worker2 pod var-expansion-028f5ad3-3652-4c65-a217-0e26762d124b container dapi-container: 
STEP: delete the pod
May  8 12:11:09.221: INFO: Waiting for pod var-expansion-028f5ad3-3652-4c65-a217-0e26762d124b to disappear
May  8 12:11:09.238: INFO: Pod var-expansion-028f5ad3-3652-4c65-a217-0e26762d124b no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 12:11:09.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-5600" for this suite.

• [SLOW TEST:5.651 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":275,"completed":265,"skipped":4588,"failed":0}
SSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 12:11:09.256: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-418328fb-ec2a-4221-b0a7-801423cb529e
STEP: Creating a pod to test consume secrets
May  8 12:11:09.382: INFO: Waiting up to 5m0s for pod "pod-secrets-2fc387cd-8e03-4247-991a-c7fb45f1c407" in namespace "secrets-9725" to be "Succeeded or Failed"
May  8 12:11:09.437: INFO: Pod "pod-secrets-2fc387cd-8e03-4247-991a-c7fb45f1c407": Phase="Pending", Reason="", readiness=false. Elapsed: 54.356532ms
May  8 12:11:11.441: INFO: Pod "pod-secrets-2fc387cd-8e03-4247-991a-c7fb45f1c407": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058956278s
May  8 12:11:13.446: INFO: Pod "pod-secrets-2fc387cd-8e03-4247-991a-c7fb45f1c407": Phase="Running", Reason="", readiness=true. Elapsed: 4.063872723s
May  8 12:11:15.451: INFO: Pod "pod-secrets-2fc387cd-8e03-4247-991a-c7fb45f1c407": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.068555237s
STEP: Saw pod success
May  8 12:11:15.451: INFO: Pod "pod-secrets-2fc387cd-8e03-4247-991a-c7fb45f1c407" satisfied condition "Succeeded or Failed"
May  8 12:11:15.454: INFO: Trying to get logs from node kali-worker pod pod-secrets-2fc387cd-8e03-4247-991a-c7fb45f1c407 container secret-volume-test: 
STEP: delete the pod
May  8 12:11:15.534: INFO: Waiting for pod pod-secrets-2fc387cd-8e03-4247-991a-c7fb45f1c407 to disappear
May  8 12:11:15.543: INFO: Pod pod-secrets-2fc387cd-8e03-4247-991a-c7fb45f1c407 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 12:11:15.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9725" for this suite.

• [SLOW TEST:6.306 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":275,"completed":266,"skipped":4591,"failed":0}
S
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 12:11:15.562: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name cm-test-opt-del-b3a88efc-6be9-4449-9364-89bdda2945ae
STEP: Creating configMap with name cm-test-opt-upd-b1090408-5838-48fe-a907-5e5894974ac0
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-b3a88efc-6be9-4449-9364-89bdda2945ae
STEP: Updating configmap cm-test-opt-upd-b1090408-5838-48fe-a907-5e5894974ac0
STEP: Creating configMap with name cm-test-opt-create-430ad8a1-0423-4362-aa67-d80d5a170e79
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 12:11:23.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4265" for this suite.

• [SLOW TEST:8.299 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":267,"skipped":4592,"failed":0}
SSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 12:11:23.862: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 12:11:27.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-7971" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":268,"skipped":4603,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 12:11:27.966: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod busybox-5055c1b9-0f2d-40d6-ab70-851761eacfc0 in namespace container-probe-2924
May  8 12:11:32.142: INFO: Started pod busybox-5055c1b9-0f2d-40d6-ab70-851761eacfc0 in namespace container-probe-2924
STEP: checking the pod's current state and verifying that restartCount is present
May  8 12:11:32.144: INFO: Initial restart count of pod busybox-5055c1b9-0f2d-40d6-ab70-851761eacfc0 is 0
May  8 12:12:26.265: INFO: Restart count of pod container-probe-2924/busybox-5055c1b9-0f2d-40d6-ab70-851761eacfc0 is now 1 (54.119985366s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 12:12:26.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2924" for this suite.

• [SLOW TEST:58.341 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":275,"completed":269,"skipped":4644,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 12:12:26.308: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-bef09c98-fe90-4579-876e-1b5795cb256c
STEP: Creating a pod to test consume configMaps
May  8 12:12:26.392: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d266f5b1-0869-4ecc-a7b6-a6d7851c5367" in namespace "projected-7956" to be "Succeeded or Failed"
May  8 12:12:26.406: INFO: Pod "pod-projected-configmaps-d266f5b1-0869-4ecc-a7b6-a6d7851c5367": Phase="Pending", Reason="", readiness=false. Elapsed: 14.694687ms
May  8 12:12:28.456: INFO: Pod "pod-projected-configmaps-d266f5b1-0869-4ecc-a7b6-a6d7851c5367": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064127269s
May  8 12:12:30.486: INFO: Pod "pod-projected-configmaps-d266f5b1-0869-4ecc-a7b6-a6d7851c5367": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.094270254s
STEP: Saw pod success
May  8 12:12:30.486: INFO: Pod "pod-projected-configmaps-d266f5b1-0869-4ecc-a7b6-a6d7851c5367" satisfied condition "Succeeded or Failed"
May  8 12:12:30.498: INFO: Trying to get logs from node kali-worker pod pod-projected-configmaps-d266f5b1-0869-4ecc-a7b6-a6d7851c5367 container projected-configmap-volume-test: 
STEP: delete the pod
May  8 12:12:30.541: INFO: Waiting for pod pod-projected-configmaps-d266f5b1-0869-4ecc-a7b6-a6d7851c5367 to disappear
May  8 12:12:30.574: INFO: Pod pod-projected-configmaps-d266f5b1-0869-4ecc-a7b6-a6d7851c5367 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 12:12:30.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7956" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":270,"skipped":4652,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 12:12:30.588: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May  8 12:12:30.668: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ac629456-27b6-4422-be18-bde71af9f199" in namespace "projected-1947" to be "Succeeded or Failed"
May  8 12:12:30.671: INFO: Pod "downwardapi-volume-ac629456-27b6-4422-be18-bde71af9f199": Phase="Pending", Reason="", readiness=false. Elapsed: 3.523427ms
May  8 12:12:32.779: INFO: Pod "downwardapi-volume-ac629456-27b6-4422-be18-bde71af9f199": Phase="Pending", Reason="", readiness=false. Elapsed: 2.111056787s
May  8 12:12:34.783: INFO: Pod "downwardapi-volume-ac629456-27b6-4422-be18-bde71af9f199": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.115358944s
STEP: Saw pod success
May  8 12:12:34.783: INFO: Pod "downwardapi-volume-ac629456-27b6-4422-be18-bde71af9f199" satisfied condition "Succeeded or Failed"
May  8 12:12:34.786: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-ac629456-27b6-4422-be18-bde71af9f199 container client-container: 
STEP: delete the pod
May  8 12:12:34.959: INFO: Waiting for pod downwardapi-volume-ac629456-27b6-4422-be18-bde71af9f199 to disappear
May  8 12:12:34.964: INFO: Pod downwardapi-volume-ac629456-27b6-4422-be18-bde71af9f199 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 12:12:34.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1947" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":275,"completed":271,"skipped":4665,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 12:12:34.971: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod liveness-1d4d990d-8dd8-40bc-ab16-05a97aafe4e0 in namespace container-probe-2867
May  8 12:12:39.136: INFO: Started pod liveness-1d4d990d-8dd8-40bc-ab16-05a97aafe4e0 in namespace container-probe-2867
STEP: checking the pod's current state and verifying that restartCount is present
May  8 12:12:39.139: INFO: Initial restart count of pod liveness-1d4d990d-8dd8-40bc-ab16-05a97aafe4e0 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 12:16:40.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2867" for this suite.

• [SLOW TEST:245.076 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":275,"completed":272,"skipped":4683,"failed":0}
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 12:16:40.048: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May  8 12:16:40.384: INFO: Create a RollingUpdate DaemonSet
May  8 12:16:40.388: INFO: Check that daemon pods launch on every node of the cluster
May  8 12:16:40.406: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  8 12:16:40.442: INFO: Number of nodes with available pods: 0
May  8 12:16:40.442: INFO: Node kali-worker is running more than one daemon pod
May  8 12:16:41.447: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  8 12:16:41.451: INFO: Number of nodes with available pods: 0
May  8 12:16:41.451: INFO: Node kali-worker is running more than one daemon pod
May  8 12:16:42.447: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  8 12:16:42.450: INFO: Number of nodes with available pods: 0
May  8 12:16:42.450: INFO: Node kali-worker is running more than one daemon pod
May  8 12:16:43.470: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  8 12:16:43.540: INFO: Number of nodes with available pods: 0
May  8 12:16:43.540: INFO: Node kali-worker is running more than one daemon pod
May  8 12:16:44.447: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  8 12:16:44.450: INFO: Number of nodes with available pods: 0
May  8 12:16:44.450: INFO: Node kali-worker is running more than one daemon pod
May  8 12:16:45.451: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  8 12:16:45.455: INFO: Number of nodes with available pods: 2
May  8 12:16:45.455: INFO: Number of running nodes: 2, number of available pods: 2
May  8 12:16:45.455: INFO: Update the DaemonSet to trigger a rollout
May  8 12:16:45.463: INFO: Updating DaemonSet daemon-set
May  8 12:16:50.479: INFO: Roll back the DaemonSet before rollout is complete
May  8 12:16:50.486: INFO: Updating DaemonSet daemon-set
May  8 12:16:50.486: INFO: Make sure DaemonSet rollback is complete
May  8 12:16:50.523: INFO: Wrong image for pod: daemon-set-blz66. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
May  8 12:16:50.523: INFO: Pod daemon-set-blz66 is not available
May  8 12:16:50.571: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  8 12:16:51.826: INFO: Wrong image for pod: daemon-set-blz66. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
May  8 12:16:51.826: INFO: Pod daemon-set-blz66 is not available
May  8 12:16:51.831: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  8 12:16:52.576: INFO: Pod daemon-set-9lvgc is not available
May  8 12:16:52.581: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1077, will wait for the garbage collector to delete the pods
May  8 12:16:52.646: INFO: Deleting DaemonSet.extensions daemon-set took: 6.153921ms
May  8 12:16:52.946: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.183496ms
May  8 12:17:03.850: INFO: Number of nodes with available pods: 0
May  8 12:17:03.850: INFO: Number of running nodes: 0, number of available pods: 0
May  8 12:17:03.875: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1077/daemonsets","resourceVersion":"2589591"},"items":null}

May  8 12:17:03.878: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1077/pods","resourceVersion":"2589591"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 12:17:03.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-1077" for this suite.

• [SLOW TEST:23.849 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":275,"completed":273,"skipped":4683,"failed":0}
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 12:17:03.897: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 12:17:03.953: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2270" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702
•{"msg":"PASSED [sig-network] Services should provide secure master service  [Conformance]","total":275,"completed":274,"skipped":4683,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  8 12:17:03.962: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May  8 12:17:04.056: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
May  8 12:17:04.079: INFO: Pod name sample-pod: Found 0 pods out of 1
May  8 12:17:09.540: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
May  8 12:17:09.540: INFO: Creating deployment "test-rolling-update-deployment"
May  8 12:17:09.727: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
May  8 12:17:10.039: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
May  8 12:17:12.047: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
May  8 12:17:12.050: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724537030, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724537030, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724537030, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724537030, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-59d5cb45c7\" is progressing."}}, CollisionCount:(*int32)(nil)}
May  8 12:17:14.054: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
May  8 12:17:14.063: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:{test-rolling-update-deployment  deployment-48 /apis/apps/v1/namespaces/deployment-48/deployments/test-rolling-update-deployment 77281433-c91b-4f84-900b-35f198e6c675 2589703 1 2020-05-08 12:17:09 +0000 UTC   map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] []  [{e2e.test Update apps/v1 2020-05-08 12:17:09 +0000 UTC FieldsV1 FieldsV1{Raw:*[managedFields JSON byte dump elided],}} {kube-controller-manager Update apps/v1 2020-05-08 12:17:13 +0000 UTC FieldsV1 &FieldsV1{Raw:*[managedFields JSON byte dump elided],}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0041d24f8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] 
[]}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-05-08 12:17:10 +0000 UTC,LastTransitionTime:2020-05-08 12:17:10 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-59d5cb45c7" has successfully progressed.,LastUpdateTime:2020-05-08 12:17:13 +0000 UTC,LastTransitionTime:2020-05-08 12:17:10 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

May  8 12:17:14.067: INFO: New ReplicaSet "test-rolling-update-deployment-59d5cb45c7" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:{test-rolling-update-deployment-59d5cb45c7  deployment-48 /apis/apps/v1/namespaces/deployment-48/replicasets/test-rolling-update-deployment-59d5cb45c7 ab1ae6fe-a132-4e47-a3c8-28dccfd09dd4 2589692 1 2020-05-08 12:17:09 +0000 UTC   map[name:sample-pod pod-template-hash:59d5cb45c7] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 77281433-c91b-4f84-900b-35f198e6c675 0xc004266487 0xc004266488}] []  [{kube-controller-manager Update apps/v1 2020-05-08 12:17:13 +0000 UTC FieldsV1 FieldsV1{Raw:*[{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77281433-c91b-4f84-900b-35f198e6c675\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}],}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 59d5cb45c7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod-template-hash:59d5cb45c7] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004266518  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
May  8 12:17:14.067: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
May  8 12:17:14.067: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller  deployment-48 /apis/apps/v1/namespaces/deployment-48/replicasets/test-rolling-update-controller 9574f4fe-a3b8-4ee2-90f7-e4c9e7ae3227 2589701 2 2020-05-08 12:17:04 +0000 UTC   map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 77281433-c91b-4f84-900b-35f198e6c675 0xc00426636f 0xc004266380}] []  [{e2e.test Update apps/v1 2020-05-08 12:17:04 +0000 UTC FieldsV1 FieldsV1{Raw:*[{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}],}} {kube-controller-manager Update apps/v1 2020-05-08 12:17:13 +0000 UTC FieldsV1 &FieldsV1{Raw:*[{"f:metadata":{"f:annotations":{"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77281433-c91b-4f84-900b-35f198e6c675\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}],}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc004266418  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
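The `FieldsV1{Raw:*[...]}` payloads in these dumps are managed-fields JSON that Go prints as a `[]byte` of decimal ASCII codes. A minimal sketch of decoding one back to readable JSON (the byte list here is a short illustrative sample, not one of the full payloads above):

```python
import json

# A FieldsV1 Raw value as it appears in the log: a Go []byte printed as
# decimal ASCII codes. This short sample decodes to '{"f:metadata":{}}';
# the real payloads above decode the same way, just with more bytes.
raw_bytes = [123, 34, 102, 58, 109, 101, 116, 97, 100, 97, 116, 97, 34, 58, 123, 125, 125]

decoded = bytes(raw_bytes).decode("utf-8")  # the JSON text the bytes encode
fields = json.loads(decoded)                # parsed managed-fields structure
print(decoded)
print(fields)
```

The same two lines recover every `Raw:*[...]` dump in this log, since server-side apply always stores managed fields as UTF-8 JSON.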
May  8 12:17:14.070: INFO: Pod "test-rolling-update-deployment-59d5cb45c7-kbcxs" is available:
&Pod{ObjectMeta:{test-rolling-update-deployment-59d5cb45c7-kbcxs test-rolling-update-deployment-59d5cb45c7- deployment-48 /api/v1/namespaces/deployment-48/pods/test-rolling-update-deployment-59d5cb45c7-kbcxs d6b6b14a-d7ba-4375-9517-5b4b958e47c8 2589691 0 2020-05-08 12:17:10 +0000 UTC   map[name:sample-pod pod-template-hash:59d5cb45c7] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-59d5cb45c7 ab1ae6fe-a132-4e47-a3c8-28dccfd09dd4 0xc0042669d7 0xc0042669d8}] []  [{kube-controller-manager Update v1 2020-05-08 12:17:10 +0000 UTC FieldsV1 FieldsV1{Raw:*[{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ab1ae6fe-a132-4e47-a3c8-28dccfd09dd4\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}],}} {kubelet Update v1 2020-05-08 12:17:13 +0000 UTC FieldsV1 &FieldsV1{Raw:*[{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.59\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-j8d4n,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-j8d4n,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-j8d4n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:
nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 12:17:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 12:17:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 12:17:13 
+0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-08 12:17:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:10.244.2.59,StartTime:2020-05-08 12:17:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-08 12:17:13 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://8fc62b1023df5c85c1f76044e51bc60750a27d7120b4fe9f460b68b3f970cd25,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.59,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
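The dumps above show the state this spec checks after the rolling update settles: the new ReplicaSet (`test-rolling-update-deployment-59d5cb45c7`) reports `Replicas:1,ReadyReplicas:1` while the old one (`test-rolling-update-controller`) is scaled to `Replicas:0`. A hedged sketch of that invariant as a standalone check; `rolling_update_complete` is a hypothetical helper, not part of the e2e framework, and the data is hand-copied from the dumps rather than fetched from a cluster:

```python
# Sketch of the post-rollout invariant: the new ReplicaSet owns all desired
# replicas and every old ReplicaSet is scaled to zero. Field names mirror
# ReplicaSetStatus; the dicts below summarize the log dumps above.
def rolling_update_complete(new_rs, old_rs_list, desired_replicas=1):
    if new_rs["replicas"] != desired_replicas:
        return False
    if new_rs["readyReplicas"] != desired_replicas:
        return False
    return all(rs["replicas"] == 0 for rs in old_rs_list)

new_rs = {"name": "test-rolling-update-deployment-59d5cb45c7",
          "replicas": 1, "readyReplicas": 1}
old_rs = [{"name": "test-rolling-update-controller", "replicas": 0}]

print(rolling_update_complete(new_rs, old_rs))  # True
```

In a real run the numbers would come from the ReplicaSet statuses in the API server, as the dumps here do.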
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  8 12:17:14.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-48" for this suite.

• [SLOW TEST:10.115 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":275,"completed":275,"skipped":4713,"failed":0}
SSSS
May  8 12:17:14.078: INFO: Running AfterSuite actions on all nodes
May  8 12:17:14.078: INFO: Running AfterSuite actions on node 1
May  8 12:17:14.078: INFO: Skipping dumping logs from cluster

JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml
{"msg":"Test Suite completed","total":275,"completed":275,"skipped":4717,"failed":0}

Ran 275 of 4992 Specs in 5176.914 seconds
SUCCESS! -- 275 Passed | 0 Failed | 0 Pending | 4717 Skipped
PASS
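The closing summary follows Ginkgo's fixed format, and its counts should be self-consistent: Passed + Failed + Pending equals the specs run, and run + skipped equals the total. A small sketch of parsing and cross-checking those two lines; the regexes are assumptions based only on the lines above:

```python
import re

# The two Ginkgo summary lines, copied from the end of this log.
summary = "Ran 275 of 4992 Specs in 5176.914 seconds"
result = "SUCCESS! -- 275 Passed | 0 Failed | 0 Pending | 4717 Skipped"

m = re.match(r"Ran (\d+) of (\d+) Specs in ([\d.]+) seconds", summary)
ran, total, secs = int(m.group(1)), int(m.group(2)), float(m.group(3))

# Pull each "<count> <label>" pair out of the result line.
counts = {label.lower(): int(n)
          for n, label in re.findall(r"(\d+) (Passed|Failed|Pending|Skipped)", result)}

# Cross-check the totals the two lines report.
assert ran == counts["passed"] + counts["failed"] + counts["pending"]
assert total == ran + counts["skipped"]
print(ran, total, counts)
```

The same check applies to the JSON progress lines (`"completed"`, `"skipped"`, `"failed"`) emitted per spec, which track the same counters incrementally.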