I0111 16:06:50.154748 10 test_context.go:457] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0111 16:06:50.159200 10 e2e.go:129] Starting e2e run "477e40b0-d99a-437b-90ad-78bfdbdf6d1f" on Ginkgo node 1
{"msg":"Test Suite starting","total":309,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1610381193 - Will randomize all specs
Will run 309 of 5667 specs

Jan 11 16:06:50.829: INFO: >>> kubeConfig: /root/.kube/config
Jan 11 16:06:50.881: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jan 11 16:06:51.283: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jan 11 16:06:51.640: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jan 11 16:06:51.640: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jan 11 16:06:51.640: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jan 11 16:06:51.684: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Jan 11 16:06:51.684: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jan 11 16:06:51.684: INFO: e2e test version: v1.20.1
Jan 11 16:06:51.688: INFO: kube-apiserver version: v1.20.0
Jan 11 16:06:51.689: INFO: >>> kubeConfig: /root/.kube/config
Jan 11 16:06:51.706: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 16:06:51.710: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
Jan 11 16:06:53.931: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 11 16:07:02.389: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 11 16:07:05.129: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745978022, loc:(*time.Location)(0x5f133f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745978022, loc:(*time.Location)(0x5f133f0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745978022, loc:(*time.Location)(0x5f133f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745978022, loc:(*time.Location)(0x5f133f0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 11 16:07:07.138: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745978022, loc:(*time.Location)(0x5f133f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745978022, loc:(*time.Location)(0x5f133f0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745978022, loc:(*time.Location)(0x5f133f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745978022, loc:(*time.Location)(0x5f133f0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 11 16:07:10.181: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:07:10.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7361" for this suite. STEP: Destroying namespace "webhook-7361-markers" for this suite. 
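Note on the spec above: it registers mutating webhooks, lists the configurations, deletes the collection, and checks that a subsequently created ConfigMap is no longer mutated. Outside the e2e framework, roughly the same API calls can be made with client-go; the sketch below is illustrative only, and the kubeconfig path and label selector are placeholders, not values taken from this run.

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Placeholder kubeconfig path; this run used /root/.kube/config.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	ctx := context.Background()

    	// List mutating webhook configurations; the selector is an assumed example label.
    	list, err := cs.AdmissionregistrationV1().MutatingWebhookConfigurations().
    		List(ctx, metav1.ListOptions{LabelSelector: "e2e-list-test=example"})
    	if err != nil {
    		panic(err)
    	}
    	for _, item := range list.Items {
    		fmt.Println("found mutating webhook configuration:", item.Name)
    	}

    	// Delete the whole matching collection, as the spec does before re-checking mutation.
    	if err := cs.AdmissionregistrationV1().MutatingWebhookConfigurations().
    		DeleteCollection(ctx, metav1.DeleteOptions{}, metav1.ListOptions{LabelSelector: "e2e-list-test=example"}); err != nil {
    		panic(err)
    	}
    }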
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:19.140 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":309,"completed":1,"skipped":15,"failed":0} SSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:07:10.854: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 11 16:07:16.015: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 11 16:07:18.038: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745978036, loc:(*time.Location)(0x5f133f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745978036, loc:(*time.Location)(0x5f133f0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745978036, loc:(*time.Location)(0x5f133f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745978035, loc:(*time.Location)(0x5f133f0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 11 16:07:21.084: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on 
ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:07:21.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8726" for this suite. STEP: Destroying namespace "webhook-8726-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:10.671 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":309,"completed":2,"skipped":19,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:07:21.530: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating configMap with name projected-configmap-test-volume-map-890fcd0c-f9b8-46f4-b6e6-0d0bcc4904a5 STEP: Creating a pod to test consume configMaps Jan 11 16:07:21.686: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-929ae42d-9813-40fd-bfc9-542a5a1fd46e" in namespace "projected-5930" to be "Succeeded or Failed" Jan 11 16:07:21.735: INFO: Pod "pod-projected-configmaps-929ae42d-9813-40fd-bfc9-542a5a1fd46e": Phase="Pending", Reason="", readiness=false. Elapsed: 48.156481ms Jan 11 16:07:24.756: INFO: Pod "pod-projected-configmaps-929ae42d-9813-40fd-bfc9-542a5a1fd46e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.069786798s Jan 11 16:07:26.877: INFO: Pod "pod-projected-configmaps-929ae42d-9813-40fd-bfc9-542a5a1fd46e": Phase="Pending", Reason="", readiness=false. Elapsed: 5.190291906s Jan 11 16:07:28.889: INFO: Pod "pod-projected-configmaps-929ae42d-9813-40fd-bfc9-542a5a1fd46e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 7.202619799s STEP: Saw pod success Jan 11 16:07:28.890: INFO: Pod "pod-projected-configmaps-929ae42d-9813-40fd-bfc9-542a5a1fd46e" satisfied condition "Succeeded or Failed" Jan 11 16:07:28.896: INFO: Trying to get logs from node leguer-worker pod pod-projected-configmaps-929ae42d-9813-40fd-bfc9-542a5a1fd46e container agnhost-container: STEP: delete the pod Jan 11 16:07:29.045: INFO: Waiting for pod pod-projected-configmaps-929ae42d-9813-40fd-bfc9-542a5a1fd46e to disappear Jan 11 16:07:29.067: INFO: Pod pod-projected-configmaps-929ae42d-9813-40fd-bfc9-542a5a1fd46e no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:07:29.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5930" for this suite. • [SLOW TEST:7.556 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":309,"completed":3,"skipped":43,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:07:29.090: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating secret with name secret-test-1a5bd4d3-c03d-44e2-b07a-25dab742df7b STEP: Creating a pod to test consume secrets Jan 11 16:07:29.253: INFO: Waiting up to 5m0s for pod "pod-secrets-98ad9bd6-2817-4fb4-8e29-5e8f385fdafc" in namespace "secrets-7595" to be "Succeeded or Failed" Jan 11 16:07:29.259: INFO: Pod "pod-secrets-98ad9bd6-2817-4fb4-8e29-5e8f385fdafc": Phase="Pending", Reason="", readiness=false. Elapsed: 5.56305ms Jan 11 16:07:31.313: INFO: Pod "pod-secrets-98ad9bd6-2817-4fb4-8e29-5e8f385fdafc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059990349s Jan 11 16:07:33.331: INFO: Pod "pod-secrets-98ad9bd6-2817-4fb4-8e29-5e8f385fdafc": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.077314227s STEP: Saw pod success Jan 11 16:07:33.331: INFO: Pod "pod-secrets-98ad9bd6-2817-4fb4-8e29-5e8f385fdafc" satisfied condition "Succeeded or Failed" Jan 11 16:07:33.336: INFO: Trying to get logs from node leguer-worker pod pod-secrets-98ad9bd6-2817-4fb4-8e29-5e8f385fdafc container secret-volume-test: STEP: delete the pod Jan 11 16:07:33.533: INFO: Waiting for pod pod-secrets-98ad9bd6-2817-4fb4-8e29-5e8f385fdafc to disappear Jan 11 16:07:33.567: INFO: Pod pod-secrets-98ad9bd6-2817-4fb4-8e29-5e8f385fdafc no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:07:33.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7595" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":309,"completed":4,"skipped":70,"failed":0} SSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:07:33.581: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating secret with name s-test-opt-del-45b7f8ab-3c85-4e1d-aa09-85f757eae21c STEP: Creating secret with name s-test-opt-upd-dc047c19-3790-4fc1-a6a4-dbeced53794e STEP: Creating the pod STEP: Deleting secret s-test-opt-del-45b7f8ab-3c85-4e1d-aa09-85f757eae21c STEP: Updating secret s-test-opt-upd-dc047c19-3790-4fc1-a6a4-dbeced53794e STEP: Creating secret with name s-test-opt-create-d3ada629-6121-481b-8839-431ee580dc4a STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:08:59.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-684" for this suite. 
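Several of the specs above follow the same pattern visible in these lines: create a test pod, then poll it until its phase becomes "Succeeded" or "Failed" (the repeated `Waiting up to 5m0s for pod ... to be "Succeeded or Failed"` entries). A minimal standalone version of that polling loop might look like the sketch below; the namespace, pod name, and intervals are placeholders rather than values from this run.

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}

    	// Placeholder namespace/name; the e2e framework generates these per test.
    	ns, name := "secrets-7595", "pod-secrets-example"

    	// Poll every 2s, up to 5m, until the pod reaches a terminal phase.
    	err = wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
    		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
    		if err != nil {
    			return false, err
    		}
    		fmt.Printf("pod %s phase=%s\n", name, pod.Status.Phase)
    		return pod.Status.Phase == corev1.PodSucceeded || pod.Status.Phase == corev1.PodFailed, nil
    	})
    	if err != nil {
    		panic(err)
    	}
    }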
• [SLOW TEST:85.733 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":309,"completed":5,"skipped":74,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:08:59.325: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:52 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Jan 11 16:09:09.523: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 11 16:09:09.532: INFO: Pod pod-with-poststart-http-hook still exists Jan 11 16:09:11.532: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 11 16:09:12.513: INFO: Pod pod-with-poststart-http-hook still exists Jan 11 16:09:13.532: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 11 16:09:13.542: INFO: Pod pod-with-poststart-http-hook still exists Jan 11 16:09:15.533: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 11 16:09:15.540: INFO: Pod pod-with-poststart-http-hook still exists Jan 11 16:09:17.533: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 11 16:09:17.542: INFO: Pod pod-with-poststart-http-hook still exists Jan 11 16:09:19.533: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 11 16:09:19.542: INFO: Pod pod-with-poststart-http-hook still exists Jan 11 16:09:21.533: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 11 16:09:21.541: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:09:21.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-8043" for this suite. 
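The lifecycle-hook spec above creates a pod whose container declares a postStart httpGet hook aimed at a helper pod, then waits for the deleted pod to disappear. The sketch below shows roughly what such a pod object looks like when built with k8s.io/api/core/v1 types; the image, target IP, and port are placeholders, and the field names follow recent releases of that package (1.20-era releases named the handler type Handler rather than LifecycleHandler).

    package main

    import (
    	"encoding/json"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/intstr"
    )

    func main() {
    	pod := &corev1.Pod{
    		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-poststart-http-hook"},
    		Spec: corev1.PodSpec{
    			Containers: []corev1.Container{{
    				Name:  "pod-with-poststart-http-hook",
    				Image: "k8s.gcr.io/pause:3.2", // placeholder image
    				Lifecycle: &corev1.Lifecycle{
    					PostStart: &corev1.LifecycleHandler{
    						HTTPGet: &corev1.HTTPGetAction{
    							Path: "/echo?msg=poststart",   // placeholder path
    							Host: "10.244.1.10",           // placeholder: IP of the hook-handler pod
    							Port: intstr.FromInt(8080),
    						},
    					},
    				},
    			}},
    		},
    	}
    	out, _ := json.MarshalIndent(pod, "", "  ")
    	fmt.Println(string(out))
    }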
• [SLOW TEST:22.233 seconds] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:43 should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":309,"completed":6,"skipped":151,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:09:21.561: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 11 16:09:21.648: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:09:22.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-174" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":309,"completed":7,"skipped":159,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:09:22.765: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:09:34.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8958" for this suite. • [SLOW TEST:11.449 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":309,"completed":8,"skipped":165,"failed":0} SS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:09:34.215: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Performing setup for networking test in namespace pod-network-test-7099 STEP: creating a selector STEP: Creating the service pods in kubernetes Jan 11 16:09:34.294: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Jan 11 16:09:34.423: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 11 16:09:36.435: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 11 16:09:39.137: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 11 16:09:40.432: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 11 16:09:42.436: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 11 16:09:44.431: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 11 16:09:46.431: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 11 16:09:48.433: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 11 16:09:50.431: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 11 16:09:52.432: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 11 16:09:54.433: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 11 16:09:56.433: INFO: The status of Pod netserver-0 is Running (Ready = true) Jan 11 16:09:56.443: INFO: The 
status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Jan 11 16:10:00.484: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 Jan 11 16:10:00.485: INFO: Breadth first check of 10.244.2.176 on host 172.18.0.13... Jan 11 16:10:00.491: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.182:9080/dial?request=hostname&protocol=udp&host=10.244.2.176&port=8081&tries=1'] Namespace:pod-network-test-7099 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 11 16:10:00.491: INFO: >>> kubeConfig: /root/.kube/config I0111 16:10:00.624048 10 log.go:181] (0x8d4aa10) (0x8d4aa80) Create stream I0111 16:10:00.624794 10 log.go:181] (0x8d4aa10) (0x8d4aa80) Stream added, broadcasting: 1 I0111 16:10:00.643749 10 log.go:181] (0x8d4aa10) Reply frame received for 1 I0111 16:10:00.644178 10 log.go:181] (0x8d4aa10) (0x8b3da40) Create stream I0111 16:10:00.644266 10 log.go:181] (0x8d4aa10) (0x8b3da40) Stream added, broadcasting: 3 I0111 16:10:00.646589 10 log.go:181] (0x8d4aa10) Reply frame received for 3 I0111 16:10:00.647183 10 log.go:181] (0x8d4aa10) (0x851f1f0) Create stream I0111 16:10:00.647326 10 log.go:181] (0x8d4aa10) (0x851f1f0) Stream added, broadcasting: 5 I0111 16:10:00.649119 10 log.go:181] (0x8d4aa10) Reply frame received for 5 I0111 16:10:00.816662 10 log.go:181] (0x8d4aa10) Data frame received for 3 I0111 16:10:00.817130 10 log.go:181] (0x8b3da40) (3) Data frame handling I0111 16:10:00.817441 10 log.go:181] (0x8d4aa10) Data frame received for 5 I0111 16:10:00.817666 10 log.go:181] (0x851f1f0) (5) Data frame handling I0111 16:10:00.818017 10 log.go:181] (0x8b3da40) (3) Data frame sent I0111 16:10:00.818482 10 log.go:181] (0x8d4aa10) Data frame received for 1 I0111 16:10:00.818619 10 log.go:181] (0x8d4aa10) Data frame received for 3 I0111 16:10:00.818811 10 log.go:181] (0x8b3da40) (3) Data frame handling I0111 16:10:00.819024 10 log.go:181] (0x8d4aa80) (1) Data frame handling I0111 16:10:00.819340 10 log.go:181] (0x8d4aa80) (1) Data frame sent I0111 16:10:00.820272 10 log.go:181] (0x8d4aa10) (0x8d4aa80) Stream removed, broadcasting: 1 I0111 16:10:00.821933 10 log.go:181] (0x8d4aa10) Go away received I0111 16:10:00.824286 10 log.go:181] (0x8d4aa10) (0x8d4aa80) Stream removed, broadcasting: 1 I0111 16:10:00.824552 10 log.go:181] (0x8d4aa10) (0x8b3da40) Stream removed, broadcasting: 3 I0111 16:10:00.824763 10 log.go:181] (0x8d4aa10) (0x851f1f0) Stream removed, broadcasting: 5 Jan 11 16:10:00.825: INFO: Waiting for responses: map[] Jan 11 16:10:00.827: INFO: reached 10.244.2.176 after 0/1 tries Jan 11 16:10:00.827: INFO: Breadth first check of 10.244.1.181 on host 172.18.0.12... 
Jan 11 16:10:00.833: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.182:9080/dial?request=hostname&protocol=udp&host=10.244.1.181&port=8081&tries=1'] Namespace:pod-network-test-7099 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 11 16:10:00.833: INFO: >>> kubeConfig: /root/.kube/config I0111 16:10:00.941399 10 log.go:181] (0x89428c0) (0x89429a0) Create stream I0111 16:10:00.941531 10 log.go:181] (0x89428c0) (0x89429a0) Stream added, broadcasting: 1 I0111 16:10:00.946273 10 log.go:181] (0x89428c0) Reply frame received for 1 I0111 16:10:00.946571 10 log.go:181] (0x89428c0) (0x8e707e0) Create stream I0111 16:10:00.946714 10 log.go:181] (0x89428c0) (0x8e707e0) Stream added, broadcasting: 3 I0111 16:10:00.948580 10 log.go:181] (0x89428c0) Reply frame received for 3 I0111 16:10:00.948941 10 log.go:181] (0x89428c0) (0x8942e70) Create stream I0111 16:10:00.949071 10 log.go:181] (0x89428c0) (0x8942e70) Stream added, broadcasting: 5 I0111 16:10:00.950587 10 log.go:181] (0x89428c0) Reply frame received for 5 I0111 16:10:01.014161 10 log.go:181] (0x89428c0) Data frame received for 3 I0111 16:10:01.014359 10 log.go:181] (0x8e707e0) (3) Data frame handling I0111 16:10:01.014506 10 log.go:181] (0x8e707e0) (3) Data frame sent I0111 16:10:01.014633 10 log.go:181] (0x89428c0) Data frame received for 3 I0111 16:10:01.014731 10 log.go:181] (0x8e707e0) (3) Data frame handling I0111 16:10:01.015582 10 log.go:181] (0x89428c0) Data frame received for 5 I0111 16:10:01.015824 10 log.go:181] (0x8942e70) (5) Data frame handling I0111 16:10:01.021814 10 log.go:181] (0x89428c0) Data frame received for 1 I0111 16:10:01.021924 10 log.go:181] (0x89429a0) (1) Data frame handling I0111 16:10:01.022027 10 log.go:181] (0x89429a0) (1) Data frame sent I0111 16:10:01.022132 10 log.go:181] (0x89428c0) (0x89429a0) Stream removed, broadcasting: 1 I0111 16:10:01.022281 10 log.go:181] (0x89428c0) Go away received I0111 16:10:01.022655 10 log.go:181] (0x89428c0) (0x89429a0) Stream removed, broadcasting: 1 I0111 16:10:01.022766 10 log.go:181] (0x89428c0) (0x8e707e0) Stream removed, broadcasting: 3 I0111 16:10:01.022844 10 log.go:181] (0x89428c0) (0x8942e70) Stream removed, broadcasting: 5 Jan 11 16:10:01.023: INFO: Waiting for responses: map[] Jan 11 16:10:01.023: INFO: reached 10.244.1.181 after 0/1 tries Jan 11 16:10:01.023: INFO: Going to retry 0 out of 2 pods.... [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:10:01.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-7099" for this suite. 
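The intra-pod UDP check above is driven by a curl from test-container-pod to the netserver's /dial endpoint; the exact URL appears in the ExecWithOptions lines. Run outside the framework, the same probe is just an HTTP GET, as in the sketch below, which only issues the request and prints the JSON body (the framework additionally parses the returned hostnames). The addresses and ports are the per-run values printed above and would only be reachable from inside this cluster's pod network.

    package main

    import (
    	"fmt"
    	"io"
    	"net/http"
    	"net/url"
    )

    func main() {
    	// Same shape of URL as the test's curl command; host/port values are per-run.
    	q := url.Values{}
    	q.Set("request", "hostname")
    	q.Set("protocol", "udp")
    	q.Set("host", "10.244.2.176") // target netserver pod IP from this run
    	q.Set("port", "8081")
    	q.Set("tries", "1")
    	probe := "http://10.244.1.182:9080/dial?" + q.Encode()

    	resp, err := http.Get(probe)
    	if err != nil {
    		panic(err)
    	}
    	defer resp.Body.Close()

    	body, err := io.ReadAll(resp.Body)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("GET %s -> %s\n", probe, string(body))
    }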
• [SLOW TEST:26.820 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:27 Granular Checks: Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:30 should function for intra-pod communication: udp [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":309,"completed":9,"skipped":167,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:10:01.037: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 11 16:10:01.127: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:10:08.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-4493" for this suite. 
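The CustomResourceDefinition specs above create, list, and delete CRD objects through the apiextensions.k8s.io API group. A bare-bones equivalent using the apiextensions clientset might look like the sketch below; the kubeconfig path is a placeholder, and nothing is created here, it only lists whatever CRDs are already installed.

    package main

    import (
    	"context"
    	"fmt"

    	clientset "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := clientset.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}

    	// List custom resource definitions cluster-wide.
    	crds, err := cs.ApiextensionsV1().CustomResourceDefinitions().List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, crd := range crds.Items {
    		fmt.Printf("%s (group %s)\n", crd.Name, crd.Spec.Group)
    	}
    }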
• [SLOW TEST:7.320 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 listing custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":309,"completed":10,"skipped":186,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:10:08.360: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1300.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-1300.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1300.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-1300.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1300.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-1300.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1300.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-1300.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1300.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-1300.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1300.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-1300.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1300.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 29.99.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.99.29_udp@PTR;check="$$(dig +tcp +noall +answer +search 29.99.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.99.29_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1300.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-1300.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1300.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-1300.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1300.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-1300.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1300.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-1300.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1300.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-1300.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1300.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-1300.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1300.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 29.99.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.99.29_udp@PTR;check="$$(dig +tcp +noall +answer +search 29.99.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.99.29_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 11 16:10:14.771: INFO: Unable to read wheezy_udp@dns-test-service.dns-1300.svc.cluster.local from pod dns-1300/dns-test-8d0da766-2630-4cb5-8cdf-7fb4f571e8ec: the server could not find the requested resource (get pods dns-test-8d0da766-2630-4cb5-8cdf-7fb4f571e8ec) Jan 11 16:10:14.777: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1300.svc.cluster.local from pod dns-1300/dns-test-8d0da766-2630-4cb5-8cdf-7fb4f571e8ec: the server could not find the requested resource (get pods dns-test-8d0da766-2630-4cb5-8cdf-7fb4f571e8ec) Jan 11 16:10:14.781: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1300.svc.cluster.local from pod dns-1300/dns-test-8d0da766-2630-4cb5-8cdf-7fb4f571e8ec: the server could not find the requested resource (get pods dns-test-8d0da766-2630-4cb5-8cdf-7fb4f571e8ec) Jan 11 16:10:14.784: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1300.svc.cluster.local from pod dns-1300/dns-test-8d0da766-2630-4cb5-8cdf-7fb4f571e8ec: the server could not find the requested resource (get pods dns-test-8d0da766-2630-4cb5-8cdf-7fb4f571e8ec) Jan 11 16:10:14.808: INFO: Unable to read jessie_udp@dns-test-service.dns-1300.svc.cluster.local from pod dns-1300/dns-test-8d0da766-2630-4cb5-8cdf-7fb4f571e8ec: the server could not find the requested resource (get pods dns-test-8d0da766-2630-4cb5-8cdf-7fb4f571e8ec) Jan 11 16:10:14.812: INFO: Unable to read jessie_tcp@dns-test-service.dns-1300.svc.cluster.local from pod dns-1300/dns-test-8d0da766-2630-4cb5-8cdf-7fb4f571e8ec: the server could not find the requested resource (get pods dns-test-8d0da766-2630-4cb5-8cdf-7fb4f571e8ec) Jan 11 16:10:14.816: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1300.svc.cluster.local from pod dns-1300/dns-test-8d0da766-2630-4cb5-8cdf-7fb4f571e8ec: the server could not find the requested resource (get pods dns-test-8d0da766-2630-4cb5-8cdf-7fb4f571e8ec) Jan 11 16:10:14.820: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1300.svc.cluster.local from pod dns-1300/dns-test-8d0da766-2630-4cb5-8cdf-7fb4f571e8ec: the server could not find the requested resource (get pods dns-test-8d0da766-2630-4cb5-8cdf-7fb4f571e8ec) Jan 11 16:10:14.842: INFO: Lookups using dns-1300/dns-test-8d0da766-2630-4cb5-8cdf-7fb4f571e8ec failed for: [wheezy_udp@dns-test-service.dns-1300.svc.cluster.local wheezy_tcp@dns-test-service.dns-1300.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1300.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1300.svc.cluster.local jessie_udp@dns-test-service.dns-1300.svc.cluster.local jessie_tcp@dns-test-service.dns-1300.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1300.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1300.svc.cluster.local] Jan 11 16:10:19.858: INFO: Unable to read wheezy_udp@dns-test-service.dns-1300.svc.cluster.local from pod dns-1300/dns-test-8d0da766-2630-4cb5-8cdf-7fb4f571e8ec: the server could not find the requested resource (get pods dns-test-8d0da766-2630-4cb5-8cdf-7fb4f571e8ec) Jan 11 16:10:19.892: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1300.svc.cluster.local from pod dns-1300/dns-test-8d0da766-2630-4cb5-8cdf-7fb4f571e8ec: the server could not find the requested resource (get pods 
dns-test-8d0da766-2630-4cb5-8cdf-7fb4f571e8ec) Jan 11 16:10:19.903: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1300.svc.cluster.local from pod dns-1300/dns-test-8d0da766-2630-4cb5-8cdf-7fb4f571e8ec: the server could not find the requested resource (get pods dns-test-8d0da766-2630-4cb5-8cdf-7fb4f571e8ec) Jan 11 16:10:19.907: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1300.svc.cluster.local from pod dns-1300/dns-test-8d0da766-2630-4cb5-8cdf-7fb4f571e8ec: the server could not find the requested resource (get pods dns-test-8d0da766-2630-4cb5-8cdf-7fb4f571e8ec) Jan 11 16:10:19.934: INFO: Unable to read jessie_udp@dns-test-service.dns-1300.svc.cluster.local from pod dns-1300/dns-test-8d0da766-2630-4cb5-8cdf-7fb4f571e8ec: the server could not find the requested resource (get pods dns-test-8d0da766-2630-4cb5-8cdf-7fb4f571e8ec) Jan 11 16:10:19.939: INFO: Unable to read jessie_tcp@dns-test-service.dns-1300.svc.cluster.local from pod dns-1300/dns-test-8d0da766-2630-4cb5-8cdf-7fb4f571e8ec: the server could not find the requested resource (get pods dns-test-8d0da766-2630-4cb5-8cdf-7fb4f571e8ec) Jan 11 16:10:19.943: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1300.svc.cluster.local from pod dns-1300/dns-test-8d0da766-2630-4cb5-8cdf-7fb4f571e8ec: the server could not find the requested resource (get pods dns-test-8d0da766-2630-4cb5-8cdf-7fb4f571e8ec) Jan 11 16:10:19.947: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1300.svc.cluster.local from pod dns-1300/dns-test-8d0da766-2630-4cb5-8cdf-7fb4f571e8ec: the server could not find the requested resource (get pods dns-test-8d0da766-2630-4cb5-8cdf-7fb4f571e8ec) Jan 11 16:10:20.008: INFO: Lookups using dns-1300/dns-test-8d0da766-2630-4cb5-8cdf-7fb4f571e8ec failed for: [wheezy_udp@dns-test-service.dns-1300.svc.cluster.local wheezy_tcp@dns-test-service.dns-1300.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1300.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1300.svc.cluster.local jessie_udp@dns-test-service.dns-1300.svc.cluster.local jessie_tcp@dns-test-service.dns-1300.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1300.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1300.svc.cluster.local] Jan 11 16:10:24.849: INFO: Unable to read wheezy_udp@dns-test-service.dns-1300.svc.cluster.local from pod dns-1300/dns-test-8d0da766-2630-4cb5-8cdf-7fb4f571e8ec: the server could not find the requested resource (get pods dns-test-8d0da766-2630-4cb5-8cdf-7fb4f571e8ec) Jan 11 16:10:24.853: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1300.svc.cluster.local from pod dns-1300/dns-test-8d0da766-2630-4cb5-8cdf-7fb4f571e8ec: the server could not find the requested resource (get pods dns-test-8d0da766-2630-4cb5-8cdf-7fb4f571e8ec) Jan 11 16:10:24.859: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1300.svc.cluster.local from pod dns-1300/dns-test-8d0da766-2630-4cb5-8cdf-7fb4f571e8ec: the server could not find the requested resource (get pods dns-test-8d0da766-2630-4cb5-8cdf-7fb4f571e8ec) Jan 11 16:10:24.862: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1300.svc.cluster.local from pod dns-1300/dns-test-8d0da766-2630-4cb5-8cdf-7fb4f571e8ec: the server could not find the requested resource (get pods dns-test-8d0da766-2630-4cb5-8cdf-7fb4f571e8ec) Jan 11 16:10:24.887: INFO: Unable to read jessie_udp@dns-test-service.dns-1300.svc.cluster.local from pod dns-1300/dns-test-8d0da766-2630-4cb5-8cdf-7fb4f571e8ec: the 
server could not find the requested resource (get pods dns-test-8d0da766-2630-4cb5-8cdf-7fb4f571e8ec) Jan 11 16:10:24.891: INFO: Unable to read jessie_tcp@dns-test-service.dns-1300.svc.cluster.local from pod dns-1300/dns-test-8d0da766-2630-4cb5-8cdf-7fb4f571e8ec: the server could not find the requested resource (get pods dns-test-8d0da766-2630-4cb5-8cdf-7fb4f571e8ec) Jan 11 16:10:24.896: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1300.svc.cluster.local from pod dns-1300/dns-test-8d0da766-2630-4cb5-8cdf-7fb4f571e8ec: the server could not find the requested resource (get pods dns-test-8d0da766-2630-4cb5-8cdf-7fb4f571e8ec) Jan 11 16:10:24.900: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1300.svc.cluster.local from pod dns-1300/dns-test-8d0da766-2630-4cb5-8cdf-7fb4f571e8ec: the server could not find the requested resource (get pods dns-test-8d0da766-2630-4cb5-8cdf-7fb4f571e8ec) Jan 11 16:10:24.927: INFO: Lookups using dns-1300/dns-test-8d0da766-2630-4cb5-8cdf-7fb4f571e8ec failed for: [wheezy_udp@dns-test-service.dns-1300.svc.cluster.local wheezy_tcp@dns-test-service.dns-1300.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1300.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1300.svc.cluster.local jessie_udp@dns-test-service.dns-1300.svc.cluster.local jessie_tcp@dns-test-service.dns-1300.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1300.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1300.svc.cluster.local] Jan 11 16:10:29.851: INFO: Unable to read wheezy_udp@dns-test-service.dns-1300.svc.cluster.local from pod dns-1300/dns-test-8d0da766-2630-4cb5-8cdf-7fb4f571e8ec: the server could not find the requested resource (get pods dns-test-8d0da766-2630-4cb5-8cdf-7fb4f571e8ec) Jan 11 16:10:29.856: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1300.svc.cluster.local from pod dns-1300/dns-test-8d0da766-2630-4cb5-8cdf-7fb4f571e8ec: the server could not find the requested resource (get pods dns-test-8d0da766-2630-4cb5-8cdf-7fb4f571e8ec) Jan 11 16:10:29.860: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1300.svc.cluster.local from pod dns-1300/dns-test-8d0da766-2630-4cb5-8cdf-7fb4f571e8ec: the server could not find the requested resource (get pods dns-test-8d0da766-2630-4cb5-8cdf-7fb4f571e8ec) Jan 11 16:10:29.864: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1300.svc.cluster.local from pod dns-1300/dns-test-8d0da766-2630-4cb5-8cdf-7fb4f571e8ec: the server could not find the requested resource (get pods dns-test-8d0da766-2630-4cb5-8cdf-7fb4f571e8ec) Jan 11 16:10:29.899: INFO: Unable to read jessie_udp@dns-test-service.dns-1300.svc.cluster.local from pod dns-1300/dns-test-8d0da766-2630-4cb5-8cdf-7fb4f571e8ec: the server could not find the requested resource (get pods dns-test-8d0da766-2630-4cb5-8cdf-7fb4f571e8ec) Jan 11 16:10:29.917: INFO: Unable to read jessie_tcp@dns-test-service.dns-1300.svc.cluster.local from pod dns-1300/dns-test-8d0da766-2630-4cb5-8cdf-7fb4f571e8ec: the server could not find the requested resource (get pods dns-test-8d0da766-2630-4cb5-8cdf-7fb4f571e8ec) Jan 11 16:10:29.921: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1300.svc.cluster.local from pod dns-1300/dns-test-8d0da766-2630-4cb5-8cdf-7fb4f571e8ec: the server could not find the requested resource (get pods dns-test-8d0da766-2630-4cb5-8cdf-7fb4f571e8ec) Jan 11 16:10:29.925: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1300.svc.cluster.local from pod 
dns-1300/dns-test-8d0da766-2630-4cb5-8cdf-7fb4f571e8ec: the server could not find the requested resource (get pods dns-test-8d0da766-2630-4cb5-8cdf-7fb4f571e8ec) Jan 11 16:10:29.951: INFO: Lookups using dns-1300/dns-test-8d0da766-2630-4cb5-8cdf-7fb4f571e8ec failed for: [wheezy_udp@dns-test-service.dns-1300.svc.cluster.local wheezy_tcp@dns-test-service.dns-1300.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1300.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1300.svc.cluster.local jessie_udp@dns-test-service.dns-1300.svc.cluster.local jessie_tcp@dns-test-service.dns-1300.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1300.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1300.svc.cluster.local] Jan 11 16:10:34.849: INFO: Unable to read wheezy_udp@dns-test-service.dns-1300.svc.cluster.local from pod dns-1300/dns-test-8d0da766-2630-4cb5-8cdf-7fb4f571e8ec: the server could not find the requested resource (get pods dns-test-8d0da766-2630-4cb5-8cdf-7fb4f571e8ec) Jan 11 16:10:34.855: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1300.svc.cluster.local from pod dns-1300/dns-test-8d0da766-2630-4cb5-8cdf-7fb4f571e8ec: the server could not find the requested resource (get pods dns-test-8d0da766-2630-4cb5-8cdf-7fb4f571e8ec) Jan 11 16:10:34.858: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1300.svc.cluster.local from pod dns-1300/dns-test-8d0da766-2630-4cb5-8cdf-7fb4f571e8ec: the server could not find the requested resource (get pods dns-test-8d0da766-2630-4cb5-8cdf-7fb4f571e8ec) Jan 11 16:10:34.862: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1300.svc.cluster.local from pod dns-1300/dns-test-8d0da766-2630-4cb5-8cdf-7fb4f571e8ec: the server could not find the requested resource (get pods dns-test-8d0da766-2630-4cb5-8cdf-7fb4f571e8ec) Jan 11 16:10:34.892: INFO: Unable to read jessie_udp@dns-test-service.dns-1300.svc.cluster.local from pod dns-1300/dns-test-8d0da766-2630-4cb5-8cdf-7fb4f571e8ec: the server could not find the requested resource (get pods dns-test-8d0da766-2630-4cb5-8cdf-7fb4f571e8ec) Jan 11 16:10:34.897: INFO: Unable to read jessie_tcp@dns-test-service.dns-1300.svc.cluster.local from pod dns-1300/dns-test-8d0da766-2630-4cb5-8cdf-7fb4f571e8ec: the server could not find the requested resource (get pods dns-test-8d0da766-2630-4cb5-8cdf-7fb4f571e8ec) Jan 11 16:10:34.901: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1300.svc.cluster.local from pod dns-1300/dns-test-8d0da766-2630-4cb5-8cdf-7fb4f571e8ec: the server could not find the requested resource (get pods dns-test-8d0da766-2630-4cb5-8cdf-7fb4f571e8ec) Jan 11 16:10:34.906: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1300.svc.cluster.local from pod dns-1300/dns-test-8d0da766-2630-4cb5-8cdf-7fb4f571e8ec: the server could not find the requested resource (get pods dns-test-8d0da766-2630-4cb5-8cdf-7fb4f571e8ec) Jan 11 16:10:34.932: INFO: Lookups using dns-1300/dns-test-8d0da766-2630-4cb5-8cdf-7fb4f571e8ec failed for: [wheezy_udp@dns-test-service.dns-1300.svc.cluster.local wheezy_tcp@dns-test-service.dns-1300.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1300.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1300.svc.cluster.local jessie_udp@dns-test-service.dns-1300.svc.cluster.local jessie_tcp@dns-test-service.dns-1300.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1300.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1300.svc.cluster.local] Jan 11 
16:10:39.849: INFO: Unable to read wheezy_udp@dns-test-service.dns-1300.svc.cluster.local from pod dns-1300/dns-test-8d0da766-2630-4cb5-8cdf-7fb4f571e8ec: the server could not find the requested resource (get pods dns-test-8d0da766-2630-4cb5-8cdf-7fb4f571e8ec) Jan 11 16:10:39.854: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1300.svc.cluster.local from pod dns-1300/dns-test-8d0da766-2630-4cb5-8cdf-7fb4f571e8ec: the server could not find the requested resource (get pods dns-test-8d0da766-2630-4cb5-8cdf-7fb4f571e8ec) Jan 11 16:10:39.859: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1300.svc.cluster.local from pod dns-1300/dns-test-8d0da766-2630-4cb5-8cdf-7fb4f571e8ec: the server could not find the requested resource (get pods dns-test-8d0da766-2630-4cb5-8cdf-7fb4f571e8ec) Jan 11 16:10:39.863: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1300.svc.cluster.local from pod dns-1300/dns-test-8d0da766-2630-4cb5-8cdf-7fb4f571e8ec: the server could not find the requested resource (get pods dns-test-8d0da766-2630-4cb5-8cdf-7fb4f571e8ec) Jan 11 16:10:39.893: INFO: Unable to read jessie_udp@dns-test-service.dns-1300.svc.cluster.local from pod dns-1300/dns-test-8d0da766-2630-4cb5-8cdf-7fb4f571e8ec: the server could not find the requested resource (get pods dns-test-8d0da766-2630-4cb5-8cdf-7fb4f571e8ec) Jan 11 16:10:39.897: INFO: Unable to read jessie_tcp@dns-test-service.dns-1300.svc.cluster.local from pod dns-1300/dns-test-8d0da766-2630-4cb5-8cdf-7fb4f571e8ec: the server could not find the requested resource (get pods dns-test-8d0da766-2630-4cb5-8cdf-7fb4f571e8ec) Jan 11 16:10:39.901: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1300.svc.cluster.local from pod dns-1300/dns-test-8d0da766-2630-4cb5-8cdf-7fb4f571e8ec: the server could not find the requested resource (get pods dns-test-8d0da766-2630-4cb5-8cdf-7fb4f571e8ec) Jan 11 16:10:39.904: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1300.svc.cluster.local from pod dns-1300/dns-test-8d0da766-2630-4cb5-8cdf-7fb4f571e8ec: the server could not find the requested resource (get pods dns-test-8d0da766-2630-4cb5-8cdf-7fb4f571e8ec) Jan 11 16:10:39.930: INFO: Lookups using dns-1300/dns-test-8d0da766-2630-4cb5-8cdf-7fb4f571e8ec failed for: [wheezy_udp@dns-test-service.dns-1300.svc.cluster.local wheezy_tcp@dns-test-service.dns-1300.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1300.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1300.svc.cluster.local jessie_udp@dns-test-service.dns-1300.svc.cluster.local jessie_tcp@dns-test-service.dns-1300.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1300.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1300.svc.cluster.local] Jan 11 16:10:44.941: INFO: DNS probes using dns-1300/dns-test-8d0da766-2630-4cb5-8cdf-7fb4f571e8ec succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:10:47.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1300" for this suite. 
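The DNS spec above works by running the dig loops shown earlier inside the wheezy and jessie probe pods and retrying until A, SRV, and PTR records for the test service resolve; the repeated "Unable to read ... the server could not find the requested resource" lines simply show the framework retrying until the probe pods have written their result files, after which the probes succeed. From a pod inside the cluster, the same A and SRV records can be checked directly with Go's resolver, as in the sketch below; the service name matches this run but only resolves in-cluster, and the PTR check is omitted.

    package main

    import (
    	"context"
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
    	defer cancel()

    	const svc = "dns-test-service.dns-1300.svc.cluster.local"

    	// A/AAAA records for the service (only resolvable from inside the cluster).
    	addrs, err := net.DefaultResolver.LookupHost(ctx, svc)
    	fmt.Println("A lookup:", addrs, err)

    	// SRV record for the named "http" TCP port, i.e. _http._tcp.<service>.
    	cname, srvs, err := net.DefaultResolver.LookupSRV(ctx, "http", "tcp", svc)
    	fmt.Println("SRV lookup:", cname, err)
    	for _, s := range srvs {
    		fmt.Printf("  target=%s port=%d\n", s.Target, s.Port)
    	}
    }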
• [SLOW TEST:39.796 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":309,"completed":11,"skipped":203,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected combined /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:10:48.158: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating configMap with name configmap-projected-all-test-volume-153db665-06d8-4a30-b3d6-43aa62d582ec STEP: Creating secret with name secret-projected-all-test-volume-47a31b58-fdc7-4d70-9829-448a4199ea26 STEP: Creating a pod to test Check all projections for projected volume plugin Jan 11 16:10:48.363: INFO: Waiting up to 5m0s for pod "projected-volume-3e1804c0-540b-4aae-b867-ebcec479c3e2" in namespace "projected-9614" to be "Succeeded or Failed" Jan 11 16:10:48.378: INFO: Pod "projected-volume-3e1804c0-540b-4aae-b867-ebcec479c3e2": Phase="Pending", Reason="", readiness=false. Elapsed: 14.379042ms Jan 11 16:10:50.387: INFO: Pod "projected-volume-3e1804c0-540b-4aae-b867-ebcec479c3e2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023250748s Jan 11 16:10:52.395: INFO: Pod "projected-volume-3e1804c0-540b-4aae-b867-ebcec479c3e2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031504427s STEP: Saw pod success Jan 11 16:10:52.396: INFO: Pod "projected-volume-3e1804c0-540b-4aae-b867-ebcec479c3e2" satisfied condition "Succeeded or Failed" Jan 11 16:10:52.400: INFO: Trying to get logs from node leguer-worker pod projected-volume-3e1804c0-540b-4aae-b867-ebcec479c3e2 container projected-all-volume-test: STEP: delete the pod Jan 11 16:10:52.483: INFO: Waiting for pod projected-volume-3e1804c0-540b-4aae-b867-ebcec479c3e2 to disappear Jan 11 16:10:52.490: INFO: Pod projected-volume-3e1804c0-540b-4aae-b867-ebcec479c3e2 no longer exists [AfterEach] [sig-storage] Projected combined /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:10:52.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9614" for this suite. 
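As a rough illustration of what the projected-volume test above assembles, the following client-go sketch builds a pod whose single projected volume combines a ConfigMap source and a Secret source. All object names are placeholders; only the container name and image follow the log.

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// projectedPod sketches a pod whose one volume projects a ConfigMap and a
// Secret into the same directory, roughly what "Projected combined" builds.
func projectedPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-volume-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "all-in-one",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{
							{ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "demo-configmap"},
							}},
							{Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "demo-secret"},
							}},
						},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "projected-all-volume-test",
				Image:        "k8s.gcr.io/e2e-test-images/agnhost:2.21",
				Command:      []string{"sh", "-c", "ls /projected-volume"},
				VolumeMounts: []corev1.VolumeMount{{Name: "all-in-one", MountPath: "/projected-volume", ReadOnly: true}},
			}},
		},
	}
}

func main() {} // construct only; create with clientset.CoreV1().Pods(ns).Create(...) and wait for "Succeeded or Failed"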
•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":309,"completed":12,"skipped":220,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:10:52.507: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:92 Jan 11 16:10:52.627: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jan 11 16:10:52.646: INFO: Waiting for terminating namespaces to be deleted... Jan 11 16:10:52.655: INFO: Logging pods the apiserver thinks is on node leguer-worker before test Jan 11 16:10:52.682: INFO: rally-a8f48c6d-3kmika18-pdtzv from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 11 16:10:52.682: INFO: Container rally-a8f48c6d-3kmika18 ready: true, restart count 0 Jan 11 16:10:52.683: INFO: rally-a8f48c6d-3kmika18-pllzg from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 11 16:10:52.683: INFO: Container rally-a8f48c6d-3kmika18 ready: true, restart count 0 Jan 11 16:10:52.683: INFO: rally-a8f48c6d-4cyi45kq-j5tzz from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 11 16:10:52.683: INFO: Container rally-a8f48c6d-4cyi45kq ready: true, restart count 0 Jan 11 16:10:52.683: INFO: rally-a8f48c6d-f3hls6a3-57dwc from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:32 +0000 UTC (1 container statuses recorded) Jan 11 16:10:52.683: INFO: Container rally-a8f48c6d-f3hls6a3 ready: true, restart count 0 Jan 11 16:10:52.683: INFO: rally-a8f48c6d-1y3amfc0-lp8st from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:32 +0000 UTC (1 container statuses recorded) Jan 11 16:10:52.683: INFO: Container rally-a8f48c6d-1y3amfc0 ready: true, restart count 0 Jan 11 16:10:52.683: INFO: rally-a8f48c6d-9pqmjehi-9zwjj from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 11 16:10:52.683: INFO: Container rally-a8f48c6d-9pqmjehi ready: true, restart count 0 Jan 11 16:10:52.683: INFO: chaos-controller-manager-69c479c674-s796v from default started at 2021-01-10 20:58:24 +0000 UTC (1 container statuses recorded) Jan 11 16:10:52.683: INFO: Container chaos-mesh ready: true, restart count 0 Jan 11 16:10:52.683: INFO: chaos-daemon-lv692 from default started at 2021-01-10 20:58:25 +0000 UTC (1 container statuses recorded) Jan 11 16:10:52.683: INFO: Container chaos-daemon ready: true, restart count 0 Jan 11 16:10:52.683: INFO: kindnet-psm25 from kube-system started at 2021-01-10 17:38:10 +0000 UTC (1 container statuses recorded) Jan 11 16:10:52.683: INFO: Container kindnet-cni ready: true, restart count 0 Jan 11 
16:10:52.683: INFO: kube-proxy-bmbcs from kube-system started at 2021-01-10 17:38:10 +0000 UTC (1 container statuses recorded) Jan 11 16:10:52.683: INFO: Container kube-proxy ready: true, restart count 0 Jan 11 16:10:52.683: INFO: Logging pods the apiserver thinks is on node leguer-worker2 before test Jan 11 16:10:52.701: INFO: rally-58b335f6-d37a582z-q9kxt from c-rally-58b335f6-cg4ube8l started at 2021-01-11 16:10:27 +0000 UTC (1 container statuses recorded) Jan 11 16:10:52.701: INFO: Container rally-58b335f6-d37a582z ready: false, restart count 0 Jan 11 16:10:52.701: INFO: rally-a8f48c6d-4cyi45kq-knr4r from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 11 16:10:52.702: INFO: Container rally-a8f48c6d-4cyi45kq ready: true, restart count 0 Jan 11 16:10:52.702: INFO: rally-a8f48c6d-f3hls6a3-dwt8n from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:32 +0000 UTC (1 container statuses recorded) Jan 11 16:10:52.702: INFO: Container rally-a8f48c6d-f3hls6a3 ready: true, restart count 0 Jan 11 16:10:52.702: INFO: rally-a8f48c6d-1y3amfc0-hh9qk from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:32 +0000 UTC (1 container statuses recorded) Jan 11 16:10:52.702: INFO: Container rally-a8f48c6d-1y3amfc0 ready: true, restart count 0 Jan 11 16:10:52.702: INFO: rally-a8f48c6d-9pqmjehi-85slb from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 11 16:10:52.702: INFO: Container rally-a8f48c6d-9pqmjehi ready: true, restart count 0 Jan 11 16:10:52.702: INFO: rally-a8f48c6d-vnukxqu0-llj24 from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 11 16:10:52.702: INFO: Container rally-a8f48c6d-vnukxqu0 ready: true, restart count 0 Jan 11 16:10:52.702: INFO: rally-a8f48c6d-vnukxqu0-v85kr from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 11 16:10:52.702: INFO: Container rally-a8f48c6d-vnukxqu0 ready: true, restart count 0 Jan 11 16:10:52.702: INFO: chaos-daemon-ffkg7 from default started at 2021-01-10 20:58:25 +0000 UTC (1 container statuses recorded) Jan 11 16:10:52.702: INFO: Container chaos-daemon ready: true, restart count 0 Jan 11 16:10:52.702: INFO: kindnet-8wggd from kube-system started at 2021-01-10 17:38:10 +0000 UTC (1 container statuses recorded) Jan 11 16:10:52.702: INFO: Container kindnet-cni ready: true, restart count 0 Jan 11 16:10:52.702: INFO: kube-proxy-29gxg from kube-system started at 2021-01-10 17:38:09 +0000 UTC (1 container statuses recorded) Jan 11 16:10:52.702: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
STEP: verifying the node has the label kubernetes.io/e2e-3b72779e-417f-4a92-ac40-26c1f67bcc2e 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 172.18.0.13 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-3b72779e-417f-4a92-ac40-26c1f67bcc2e off the node leguer-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-3b72779e-417f-4a92-ac40-26c1f67bcc2e [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:16:01.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-494" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:83 • [SLOW TEST:308.593 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":309,"completed":13,"skipped":245,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:16:01.102: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:129 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 11 16:16:01.243: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
Jan 11 16:16:01.263: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 16:16:01.280: INFO: Number of nodes with available pods: 0 Jan 11 16:16:01.280: INFO: Node leguer-worker is running more than one daemon pod Jan 11 16:16:02.293: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 16:16:02.302: INFO: Number of nodes with available pods: 0 Jan 11 16:16:02.302: INFO: Node leguer-worker is running more than one daemon pod Jan 11 16:16:03.446: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 16:16:03.453: INFO: Number of nodes with available pods: 0 Jan 11 16:16:03.453: INFO: Node leguer-worker is running more than one daemon pod Jan 11 16:16:04.293: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 16:16:04.300: INFO: Number of nodes with available pods: 0 Jan 11 16:16:04.300: INFO: Node leguer-worker is running more than one daemon pod Jan 11 16:16:05.292: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 16:16:05.298: INFO: Number of nodes with available pods: 0 Jan 11 16:16:05.298: INFO: Node leguer-worker is running more than one daemon pod Jan 11 16:16:06.299: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 16:16:06.310: INFO: Number of nodes with available pods: 1 Jan 11 16:16:06.310: INFO: Node leguer-worker is running more than one daemon pod Jan 11 16:16:07.290: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 16:16:07.298: INFO: Number of nodes with available pods: 2 Jan 11 16:16:07.298: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Jan 11 16:16:07.554: INFO: Wrong image for pod: daemon-set-bptd7. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 11 16:16:07.555: INFO: Wrong image for pod: daemon-set-hm66h. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 11 16:16:07.607: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 16:16:08.617: INFO: Wrong image for pod: daemon-set-bptd7. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 11 16:16:08.618: INFO: Wrong image for pod: daemon-set-hm66h. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. 
Jan 11 16:16:08.627: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 16:16:09.618: INFO: Wrong image for pod: daemon-set-bptd7. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 11 16:16:09.618: INFO: Pod daemon-set-bptd7 is not available Jan 11 16:16:09.618: INFO: Wrong image for pod: daemon-set-hm66h. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 11 16:16:09.631: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 16:16:10.618: INFO: Wrong image for pod: daemon-set-bptd7. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 11 16:16:10.618: INFO: Pod daemon-set-bptd7 is not available Jan 11 16:16:10.618: INFO: Wrong image for pod: daemon-set-hm66h. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 11 16:16:10.629: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 16:16:11.618: INFO: Wrong image for pod: daemon-set-bptd7. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 11 16:16:11.618: INFO: Pod daemon-set-bptd7 is not available Jan 11 16:16:11.618: INFO: Wrong image for pod: daemon-set-hm66h. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 11 16:16:11.629: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 16:16:12.617: INFO: Wrong image for pod: daemon-set-bptd7. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 11 16:16:12.617: INFO: Pod daemon-set-bptd7 is not available Jan 11 16:16:12.617: INFO: Wrong image for pod: daemon-set-hm66h. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 11 16:16:12.629: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 16:16:13.617: INFO: Wrong image for pod: daemon-set-bptd7. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 11 16:16:13.617: INFO: Pod daemon-set-bptd7 is not available Jan 11 16:16:13.617: INFO: Wrong image for pod: daemon-set-hm66h. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 11 16:16:13.626: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 16:16:14.617: INFO: Wrong image for pod: daemon-set-bptd7. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 11 16:16:14.617: INFO: Pod daemon-set-bptd7 is not available Jan 11 16:16:14.617: INFO: Wrong image for pod: daemon-set-hm66h. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. 
Jan 11 16:16:14.624: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 16:16:15.618: INFO: Wrong image for pod: daemon-set-bptd7. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 11 16:16:15.618: INFO: Pod daemon-set-bptd7 is not available Jan 11 16:16:15.618: INFO: Wrong image for pod: daemon-set-hm66h. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 11 16:16:15.628: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 16:16:16.646: INFO: Wrong image for pod: daemon-set-bptd7. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 11 16:16:16.646: INFO: Pod daemon-set-bptd7 is not available Jan 11 16:16:16.646: INFO: Wrong image for pod: daemon-set-hm66h. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 11 16:16:16.655: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 16:16:17.615: INFO: Wrong image for pod: daemon-set-bptd7. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 11 16:16:17.615: INFO: Pod daemon-set-bptd7 is not available Jan 11 16:16:17.615: INFO: Wrong image for pod: daemon-set-hm66h. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 11 16:16:17.624: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 16:16:18.619: INFO: Wrong image for pod: daemon-set-bptd7. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 11 16:16:18.619: INFO: Pod daemon-set-bptd7 is not available Jan 11 16:16:18.619: INFO: Wrong image for pod: daemon-set-hm66h. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 11 16:16:18.631: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 16:16:19.617: INFO: Wrong image for pod: daemon-set-bptd7. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 11 16:16:19.617: INFO: Pod daemon-set-bptd7 is not available Jan 11 16:16:19.617: INFO: Wrong image for pod: daemon-set-hm66h. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 11 16:16:19.627: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 16:16:20.643: INFO: Wrong image for pod: daemon-set-bptd7. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 11 16:16:20.643: INFO: Pod daemon-set-bptd7 is not available Jan 11 16:16:20.643: INFO: Wrong image for pod: daemon-set-hm66h. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. 
Jan 11 16:16:20.672: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 16:16:21.616: INFO: Pod daemon-set-bv7b9 is not available Jan 11 16:16:21.616: INFO: Wrong image for pod: daemon-set-hm66h. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 11 16:16:21.626: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 16:16:22.618: INFO: Pod daemon-set-bv7b9 is not available Jan 11 16:16:22.618: INFO: Wrong image for pod: daemon-set-hm66h. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 11 16:16:22.630: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 16:16:23.617: INFO: Pod daemon-set-bv7b9 is not available Jan 11 16:16:23.617: INFO: Wrong image for pod: daemon-set-hm66h. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 11 16:16:23.628: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 16:16:24.617: INFO: Wrong image for pod: daemon-set-hm66h. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 11 16:16:24.625: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 16:16:25.617: INFO: Wrong image for pod: daemon-set-hm66h. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 11 16:16:25.617: INFO: Pod daemon-set-hm66h is not available Jan 11 16:16:25.627: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 16:16:26.618: INFO: Wrong image for pod: daemon-set-hm66h. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 11 16:16:26.618: INFO: Pod daemon-set-hm66h is not available Jan 11 16:16:26.627: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 16:16:27.616: INFO: Wrong image for pod: daemon-set-hm66h. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 11 16:16:27.616: INFO: Pod daemon-set-hm66h is not available Jan 11 16:16:27.623: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 16:16:28.638: INFO: Wrong image for pod: daemon-set-hm66h. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. 
Jan 11 16:16:28.638: INFO: Pod daemon-set-hm66h is not available Jan 11 16:16:28.673: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 16:16:29.616: INFO: Wrong image for pod: daemon-set-hm66h. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 11 16:16:29.616: INFO: Pod daemon-set-hm66h is not available Jan 11 16:16:29.661: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 16:16:30.617: INFO: Pod daemon-set-mn7dg is not available Jan 11 16:16:30.627: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. Jan 11 16:16:30.636: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 16:16:30.642: INFO: Number of nodes with available pods: 1 Jan 11 16:16:30.643: INFO: Node leguer-worker2 is running more than one daemon pod Jan 11 16:16:31.752: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 16:16:31.760: INFO: Number of nodes with available pods: 1 Jan 11 16:16:31.760: INFO: Node leguer-worker2 is running more than one daemon pod Jan 11 16:16:32.657: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 16:16:32.663: INFO: Number of nodes with available pods: 1 Jan 11 16:16:32.663: INFO: Node leguer-worker2 is running more than one daemon pod Jan 11 16:16:33.653: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 16:16:33.660: INFO: Number of nodes with available pods: 1 Jan 11 16:16:33.660: INFO: Node leguer-worker2 is running more than one daemon pod Jan 11 16:16:34.654: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 16:16:34.660: INFO: Number of nodes with available pods: 2 Jan 11 16:16:34.661: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:95 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6897, will wait for the garbage collector to delete the pods Jan 11 16:16:34.756: INFO: Deleting DaemonSet.extensions daemon-set took: 7.257238ms Jan 11 16:16:35.358: INFO: Terminating DaemonSet.extensions daemon-set pods took: 602.194159ms Jan 11 16:16:40.164: INFO: Number of nodes with available pods: 0 Jan 11 16:16:40.164: INFO: Number of running nodes: 0, number of available pods: 0 Jan 11 16:16:40.173: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"186367"},"items":null} Jan 11 16:16:40.179: INFO: 
pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"186367"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:16:40.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6897" for this suite. • [SLOW TEST:39.110 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":309,"completed":14,"skipped":273,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:16:40.216: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 11 16:16:40.581: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"19961941-31c5-4dc1-b07f-83dbb4d1a443", Controller:(*bool)(0x86c60f2), BlockOwnerDeletion:(*bool)(0x86c60f3)}} Jan 11 16:16:40.611: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"63b755e6-7482-4e6e-9a8f-062a282122d0", Controller:(*bool)(0x82c041a), BlockOwnerDeletion:(*bool)(0x82c041b)}} Jan 11 16:16:40.636: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"c05be08d-279e-4050-8cee-736178399b76", Controller:(*bool)(0x82c060a), BlockOwnerDeletion:(*bool)(0x82c060b)}} [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:16:45.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5763" for this suite. 
• [SLOW TEST:5.463 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":309,"completed":15,"skipped":322,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:16:45.681: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating secret secrets-5795/secret-test-1b7317a0-eda9-481b-b519-ef04eed70044 STEP: Creating a pod to test consume secrets Jan 11 16:16:45.833: INFO: Waiting up to 5m0s for pod "pod-configmaps-eba76c4b-6f9e-4eca-aa28-3693e1b86ed1" in namespace "secrets-5795" to be "Succeeded or Failed" Jan 11 16:16:45.849: INFO: Pod "pod-configmaps-eba76c4b-6f9e-4eca-aa28-3693e1b86ed1": Phase="Pending", Reason="", readiness=false. Elapsed: 15.887864ms Jan 11 16:16:47.856: INFO: Pod "pod-configmaps-eba76c4b-6f9e-4eca-aa28-3693e1b86ed1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022390688s Jan 11 16:16:49.866: INFO: Pod "pod-configmaps-eba76c4b-6f9e-4eca-aa28-3693e1b86ed1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032607409s Jan 11 16:16:51.875: INFO: Pod "pod-configmaps-eba76c4b-6f9e-4eca-aa28-3693e1b86ed1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.04143951s STEP: Saw pod success Jan 11 16:16:51.875: INFO: Pod "pod-configmaps-eba76c4b-6f9e-4eca-aa28-3693e1b86ed1" satisfied condition "Succeeded or Failed" Jan 11 16:16:51.881: INFO: Trying to get logs from node leguer-worker2 pod pod-configmaps-eba76c4b-6f9e-4eca-aa28-3693e1b86ed1 container env-test: STEP: delete the pod Jan 11 16:16:51.952: INFO: Waiting for pod pod-configmaps-eba76c4b-6f9e-4eca-aa28-3693e1b86ed1 to disappear Jan 11 16:16:51.962: INFO: Pod pod-configmaps-eba76c4b-6f9e-4eca-aa28-3693e1b86ed1 no longer exists [AfterEach] [sig-api-machinery] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:16:51.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5795" for this suite. 
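A hedged sketch of the pattern this Secrets test exercises, reading one Secret key into a container environment variable. The container name env-test matches the log; the secret name and key are placeholders.

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// secretEnvPod sketches "consume secret via the environment": one key from a
// Secret is exposed to the test container as an environment variable.
func secretEnvPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secret-env-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "env-test",
				Image:   "k8s.gcr.io/e2e-test-images/agnhost:2.21",
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{{
					Name: "SECRET_DATA",
					ValueFrom: &corev1.EnvVarSource{
						SecretKeyRef: &corev1.SecretKeySelector{
							LocalObjectReference: corev1.LocalObjectReference{Name: "secret-test-demo"},
							Key:                  "data-1",
						},
					},
				}},
			}},
		},
	}
}

func main() {} // create the Secret first, then the pod, then check its logs for SECRET_DATA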
• [SLOW TEST:6.294 seconds] [sig-api-machinery] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:36 should be consumable via the environment [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":309,"completed":16,"skipped":329,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:16:51.981: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [AfterEach] [k8s.io] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:16:56.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-7964" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":309,"completed":17,"skipped":379,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:16:56.132: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0111 16:17:08.960458 10 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Jan 11 16:18:11.148: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. 
Jan 11 16:18:11.148: INFO: Deleting pod "simpletest-rc-to-be-deleted-4hvqz" in namespace "gc-7529" Jan 11 16:18:11.186: INFO: Deleting pod "simpletest-rc-to-be-deleted-7j9sd" in namespace "gc-7529" Jan 11 16:18:11.222: INFO: Deleting pod "simpletest-rc-to-be-deleted-c7jtk" in namespace "gc-7529" Jan 11 16:18:11.287: INFO: Deleting pod "simpletest-rc-to-be-deleted-ft9rt" in namespace "gc-7529" Jan 11 16:18:11.335: INFO: Deleting pod "simpletest-rc-to-be-deleted-m9f42" in namespace "gc-7529" [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:18:11.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7529" for this suite. • [SLOW TEST:75.964 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":309,"completed":18,"skipped":429,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:18:12.101: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-9367 [It] Should recreate evicted statefulset [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-9367 STEP: Creating statefulset with conflicting port in namespace statefulset-9367 STEP: Waiting until pod test-pod will start running in namespace statefulset-9367 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-9367 Jan 11 16:18:18.655: INFO: Observed stateful pod in namespace: statefulset-9367, name: ss-0, uid: df879535-2b4f-44e1-bd60-533aca88232a, status phase: Pending. Waiting for statefulset controller to delete. Jan 11 16:18:18.814: INFO: Observed stateful pod in namespace: statefulset-9367, name: ss-0, uid: df879535-2b4f-44e1-bd60-533aca88232a, status phase: Failed. Waiting for statefulset controller to delete. 
Jan 11 16:18:18.820: INFO: Observed stateful pod in namespace: statefulset-9367, name: ss-0, uid: df879535-2b4f-44e1-bd60-533aca88232a, status phase: Failed. Waiting for statefulset controller to delete. Jan 11 16:18:18.841: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-9367 STEP: Removing pod with conflicting port in namespace statefulset-9367 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-9367 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Jan 11 16:18:22.979: INFO: Deleting all statefulset in ns statefulset-9367 Jan 11 16:18:22.986: INFO: Scaling statefulset ss to 0 Jan 11 16:19:23.041: INFO: Waiting for statefulset status.replicas updated to 0 Jan 11 16:19:23.046: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:19:23.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9367" for this suite. • [SLOW TEST:70.997 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 Should recreate evicted statefulset [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":309,"completed":19,"skipped":474,"failed":0} S ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:19:23.099: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:19:27.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-3998" for this suite. 
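For the wrapper-volume test just finished above, a rough sketch of a pod that mounts a Secret-backed volume and a ConfigMap-backed volume side by side; both are implemented on emptyDir "wrapper" volumes by the kubelet, and the test asserts the two do not conflict. Names are illustrative.

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// wrapperPod mounts one Secret volume and one ConfigMap volume in the same
// pod, the "should not conflict" scenario cleaned up in the log above.
func wrapperPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-demo"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{
				{Name: "secret-volume", VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{SecretName: "wrapped-volume-secret"},
				}},
				{Name: "configmap-volume", VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "wrapped-volume-configmap"},
					},
				}},
			},
			Containers: []corev1.Container{{
				Name:  "secret-test",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.21",
				VolumeMounts: []corev1.VolumeMount{
					{Name: "secret-volume", MountPath: "/etc/secret-volume", ReadOnly: true},
					{Name: "configmap-volume", MountPath: "/etc/configmap-volume", ReadOnly: true},
				},
			}},
		},
	}
}

func main() {}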
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":309,"completed":20,"skipped":475,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:19:27.659: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 11 16:19:38.663: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 11 16:19:40.681: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745978778, loc:(*time.Location)(0x5f133f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745978778, loc:(*time.Location)(0x5f133f0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745978778, loc:(*time.Location)(0x5f133f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745978778, loc:(*time.Location)(0x5f133f0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 11 16:19:43.716: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 11 16:19:43.724: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-6142-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:19:44.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1423" for this suite. STEP: Destroying namespace "webhook-1423-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:17.402 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":309,"completed":21,"skipped":493,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:19:45.064: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-5180 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-5180 STEP: creating replication controller externalsvc in namespace services-5180 I0111 16:19:45.825149 10 runners.go:190] Created replication controller with name: externalsvc, namespace: services-5180, replica count: 2 I0111 16:19:48.878526 10 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0111 16:19:51.880986 10 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Jan 11 16:19:51.906: INFO: Creating new exec pod Jan 11 16:19:55.985: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-5180 exec execpoddgl9x -- /bin/sh -x -c nslookup clusterip-service.services-5180.svc.cluster.local' Jan 11 16:20:00.833: INFO: stderr: "I0111 16:20:00.664561 34 log.go:181] (0x2533030) (0x25330a0) Create stream\nI0111 16:20:00.667466 34 log.go:181] (0x2533030) (0x25330a0) Stream added, broadcasting: 1\nI0111 16:20:00.680350 34 log.go:181] (0x2533030) Reply frame received for 1\nI0111 16:20:00.681131 34 log.go:181] (0x2533030) (0x279c230) Create stream\nI0111 16:20:00.681235 34 log.go:181] (0x2533030) (0x279c230) Stream added, broadcasting: 3\nI0111 16:20:00.682687 34 log.go:181] (0x2533030) Reply frame received for 3\nI0111 16:20:00.682956 34 log.go:181] (0x2533030) (0x2533260) Create 
stream\nI0111 16:20:00.683044 34 log.go:181] (0x2533030) (0x2533260) Stream added, broadcasting: 5\nI0111 16:20:00.684784 34 log.go:181] (0x2533030) Reply frame received for 5\nI0111 16:20:00.768293 34 log.go:181] (0x2533030) Data frame received for 5\nI0111 16:20:00.768647 34 log.go:181] (0x2533260) (5) Data frame handling\nI0111 16:20:00.769387 34 log.go:181] (0x2533260) (5) Data frame sent\n+ nslookup clusterip-service.services-5180.svc.cluster.local\nI0111 16:20:00.807538 34 log.go:181] (0x2533030) Data frame received for 3\nI0111 16:20:00.807662 34 log.go:181] (0x279c230) (3) Data frame handling\nI0111 16:20:00.807742 34 log.go:181] (0x279c230) (3) Data frame sent\nI0111 16:20:00.808756 34 log.go:181] (0x2533030) Data frame received for 3\nI0111 16:20:00.808923 34 log.go:181] (0x279c230) (3) Data frame handling\nI0111 16:20:00.809028 34 log.go:181] (0x279c230) (3) Data frame sent\nI0111 16:20:00.809930 34 log.go:181] (0x2533030) Data frame received for 3\nI0111 16:20:00.810013 34 log.go:181] (0x279c230) (3) Data frame handling\nI0111 16:20:00.810423 34 log.go:181] (0x2533030) Data frame received for 5\nI0111 16:20:00.810719 34 log.go:181] (0x2533260) (5) Data frame handling\nI0111 16:20:00.812382 34 log.go:181] (0x2533030) Data frame received for 1\nI0111 16:20:00.812475 34 log.go:181] (0x25330a0) (1) Data frame handling\nI0111 16:20:00.812573 34 log.go:181] (0x25330a0) (1) Data frame sent\nI0111 16:20:00.814006 34 log.go:181] (0x2533030) (0x25330a0) Stream removed, broadcasting: 1\nI0111 16:20:00.815753 34 log.go:181] (0x2533030) Go away received\nI0111 16:20:00.820562 34 log.go:181] (0x2533030) (0x25330a0) Stream removed, broadcasting: 1\nI0111 16:20:00.821071 34 log.go:181] (0x2533030) (0x279c230) Stream removed, broadcasting: 3\nI0111 16:20:00.821408 34 log.go:181] (0x2533030) (0x2533260) Stream removed, broadcasting: 5\n" Jan 11 16:20:00.834: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-5180.svc.cluster.local\tcanonical name = externalsvc.services-5180.svc.cluster.local.\nName:\texternalsvc.services-5180.svc.cluster.local\nAddress: 10.96.89.228\n\n" STEP: deleting ReplicationController externalsvc in namespace services-5180, will wait for the garbage collector to delete the pods Jan 11 16:20:00.904: INFO: Deleting ReplicationController externalsvc took: 10.349658ms Jan 11 16:20:01.505: INFO: Terminating ReplicationController externalsvc pods took: 601.289775ms Jan 11 16:20:10.164: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:20:10.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5180" for this suite. 
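The nslookup output above returns a CNAME because the test converts the service in place. A minimal client-go sketch of that conversion follows; namespace, names, and the target FQDN are parameters here, and the field changes mirror the "changing the ClusterIP service to type=ExternalName" step, not the framework's exact helper.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// toExternalName turns an existing ClusterIP service into an ExternalName
// alias for another service's in-cluster FQDN, so lookups of the old name
// resolve via CNAME to the target.
func toExternalName(ctx context.Context, cs *kubernetes.Clientset, ns, name, target string) error {
	svc, err := cs.CoreV1().Services(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	svc.Spec.Type = corev1.ServiceTypeExternalName
	svc.Spec.ExternalName = target // e.g. "externalsvc.services-5180.svc.cluster.local"
	svc.Spec.ClusterIP = ""        // an ExternalName service keeps no cluster IP
	_, err = cs.CoreV1().Services(ns).Update(ctx, svc, metav1.UpdateOptions{})
	return err
}

func main() {}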
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 • [SLOW TEST:25.246 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":309,"completed":22,"skipped":514,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:20:10.313: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating configMap with name projected-configmap-test-volume-08cd9935-ca0e-46d6-9cad-288a5152b70b STEP: Creating a pod to test consume configMaps Jan 11 16:20:10.444: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c8ba4b80-8fea-494b-bf1a-67b473643bb4" in namespace "projected-4179" to be "Succeeded or Failed" Jan 11 16:20:10.458: INFO: Pod "pod-projected-configmaps-c8ba4b80-8fea-494b-bf1a-67b473643bb4": Phase="Pending", Reason="", readiness=false. Elapsed: 14.120625ms Jan 11 16:20:12.466: INFO: Pod "pod-projected-configmaps-c8ba4b80-8fea-494b-bf1a-67b473643bb4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021900459s Jan 11 16:20:14.475: INFO: Pod "pod-projected-configmaps-c8ba4b80-8fea-494b-bf1a-67b473643bb4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030524023s STEP: Saw pod success Jan 11 16:20:14.475: INFO: Pod "pod-projected-configmaps-c8ba4b80-8fea-494b-bf1a-67b473643bb4" satisfied condition "Succeeded or Failed" Jan 11 16:20:14.481: INFO: Trying to get logs from node leguer-worker pod pod-projected-configmaps-c8ba4b80-8fea-494b-bf1a-67b473643bb4 container agnhost-container: STEP: delete the pod Jan 11 16:20:14.533: INFO: Waiting for pod pod-projected-configmaps-c8ba4b80-8fea-494b-bf1a-67b473643bb4 to disappear Jan 11 16:20:14.546: INFO: Pod pod-projected-configmaps-c8ba4b80-8fea-494b-bf1a-67b473643bb4 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:20:14.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4179" for this suite. 
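A rough sketch of the pod this projected-ConfigMap test creates: the ConfigMap is mounted through the projected volume plugin and the container runs as a non-root UID, which is the "as non-root" part of the test. The UID and object names are illustrative; the container name and image follow the log.

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// nonRootProjectedPod mounts a ConfigMap via the projected volume plugin and
// runs its container as an arbitrary non-root user.
func nonRootProjectedPod() *corev1.Pod {
	uid := int64(1000) // any non-root UID; illustrative
	nonRoot := true
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser:    &uid,
				RunAsNonRoot: &nonRoot,
			},
			Volumes: []corev1.Volume{{
				Name: "projected-configmap-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-volume-demo"},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "agnhost-container",
				Image:   "k8s.gcr.io/e2e-test-images/agnhost:2.21",
				Command: []string{"sh", "-c", "cat /etc/projected-configmap-volume/*"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-configmap-volume",
					MountPath: "/etc/projected-configmap-volume",
					ReadOnly:  true,
				}},
			}},
		},
	}
}

func main() {}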
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":309,"completed":23,"skipped":545,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:20:14.564: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:187 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 11 16:20:14.709: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:20:18.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2405" for this suite. •{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":309,"completed":24,"skipped":566,"failed":0} S ------------------------------ [sig-network] Ingress API should support creating Ingress API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] Ingress API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:20:19.012: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename ingress STEP: Waiting for a default service account to be provisioned in namespace [It] should support creating Ingress API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: getting /apis STEP: getting /apis/networking.k8s.io STEP: getting /apis/networking.k8s.iov1 STEP: creating STEP: getting STEP: listing STEP: watching Jan 11 16:20:19.194: INFO: starting watch STEP: cluster-wide listing STEP: cluster-wide watching Jan 11 16:20:19.202: INFO: starting watch STEP: patching STEP: updating Jan 11 16:20:19.224: INFO: waiting for watch events with expected annotations Jan 11 16:20:19.225: INFO: saw patched and updated annotations STEP: patching /status STEP: updating /status STEP: get /status STEP: deleting STEP: deleting a collection [AfterEach] [sig-network] Ingress API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:20:19.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "ingress-7164" for this suite. 
•{"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":309,"completed":25,"skipped":567,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] Discovery /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:20:19.516: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename discovery STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Discovery /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/discovery.go:39 STEP: Setting up server cert [It] should validate PreferredVersion for each APIGroup [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 11 16:20:30.241: INFO: Checking APIGroup: apiregistration.k8s.io Jan 11 16:20:30.244: INFO: PreferredVersion.GroupVersion: apiregistration.k8s.io/v1 Jan 11 16:20:30.244: INFO: Versions found [{apiregistration.k8s.io/v1 v1} {apiregistration.k8s.io/v1beta1 v1beta1}] Jan 11 16:20:30.245: INFO: apiregistration.k8s.io/v1 matches apiregistration.k8s.io/v1 Jan 11 16:20:30.245: INFO: Checking APIGroup: apps Jan 11 16:20:30.247: INFO: PreferredVersion.GroupVersion: apps/v1 Jan 11 16:20:30.247: INFO: Versions found [{apps/v1 v1}] Jan 11 16:20:30.247: INFO: apps/v1 matches apps/v1 Jan 11 16:20:30.247: INFO: Checking APIGroup: events.k8s.io Jan 11 16:20:30.249: INFO: PreferredVersion.GroupVersion: events.k8s.io/v1 Jan 11 16:20:30.249: INFO: Versions found [{events.k8s.io/v1 v1} {events.k8s.io/v1beta1 v1beta1}] Jan 11 16:20:30.249: INFO: events.k8s.io/v1 matches events.k8s.io/v1 Jan 11 16:20:30.249: INFO: Checking APIGroup: authentication.k8s.io Jan 11 16:20:30.251: INFO: PreferredVersion.GroupVersion: authentication.k8s.io/v1 Jan 11 16:20:30.251: INFO: Versions found [{authentication.k8s.io/v1 v1} {authentication.k8s.io/v1beta1 v1beta1}] Jan 11 16:20:30.251: INFO: authentication.k8s.io/v1 matches authentication.k8s.io/v1 Jan 11 16:20:30.252: INFO: Checking APIGroup: authorization.k8s.io Jan 11 16:20:30.254: INFO: PreferredVersion.GroupVersion: authorization.k8s.io/v1 Jan 11 16:20:30.254: INFO: Versions found [{authorization.k8s.io/v1 v1} {authorization.k8s.io/v1beta1 v1beta1}] Jan 11 16:20:30.254: INFO: authorization.k8s.io/v1 matches authorization.k8s.io/v1 Jan 11 16:20:30.254: INFO: Checking APIGroup: autoscaling Jan 11 16:20:30.256: INFO: PreferredVersion.GroupVersion: autoscaling/v1 Jan 11 16:20:30.256: INFO: Versions found [{autoscaling/v1 v1} {autoscaling/v2beta1 v2beta1} {autoscaling/v2beta2 v2beta2}] Jan 11 16:20:30.256: INFO: autoscaling/v1 matches autoscaling/v1 Jan 11 16:20:30.256: INFO: Checking APIGroup: batch Jan 11 16:20:30.258: INFO: PreferredVersion.GroupVersion: batch/v1 Jan 11 16:20:30.258: INFO: Versions found [{batch/v1 v1} {batch/v1beta1 v1beta1}] Jan 11 16:20:30.258: INFO: batch/v1 matches batch/v1 Jan 11 16:20:30.258: INFO: Checking APIGroup: certificates.k8s.io Jan 11 16:20:30.260: INFO: PreferredVersion.GroupVersion: certificates.k8s.io/v1 Jan 11 16:20:30.260: INFO: Versions found [{certificates.k8s.io/v1 v1} 
{certificates.k8s.io/v1beta1 v1beta1}] Jan 11 16:20:30.260: INFO: certificates.k8s.io/v1 matches certificates.k8s.io/v1 Jan 11 16:20:30.260: INFO: Checking APIGroup: networking.k8s.io Jan 11 16:20:30.263: INFO: PreferredVersion.GroupVersion: networking.k8s.io/v1 Jan 11 16:20:30.263: INFO: Versions found [{networking.k8s.io/v1 v1} {networking.k8s.io/v1beta1 v1beta1}] Jan 11 16:20:30.263: INFO: networking.k8s.io/v1 matches networking.k8s.io/v1 Jan 11 16:20:30.263: INFO: Checking APIGroup: extensions Jan 11 16:20:30.264: INFO: PreferredVersion.GroupVersion: extensions/v1beta1 Jan 11 16:20:30.264: INFO: Versions found [{extensions/v1beta1 v1beta1}] Jan 11 16:20:30.264: INFO: extensions/v1beta1 matches extensions/v1beta1 Jan 11 16:20:30.264: INFO: Checking APIGroup: policy Jan 11 16:20:30.266: INFO: PreferredVersion.GroupVersion: policy/v1beta1 Jan 11 16:20:30.266: INFO: Versions found [{policy/v1beta1 v1beta1}] Jan 11 16:20:30.266: INFO: policy/v1beta1 matches policy/v1beta1 Jan 11 16:20:30.266: INFO: Checking APIGroup: rbac.authorization.k8s.io Jan 11 16:20:30.267: INFO: PreferredVersion.GroupVersion: rbac.authorization.k8s.io/v1 Jan 11 16:20:30.267: INFO: Versions found [{rbac.authorization.k8s.io/v1 v1} {rbac.authorization.k8s.io/v1beta1 v1beta1}] Jan 11 16:20:30.267: INFO: rbac.authorization.k8s.io/v1 matches rbac.authorization.k8s.io/v1 Jan 11 16:20:30.268: INFO: Checking APIGroup: storage.k8s.io Jan 11 16:20:30.269: INFO: PreferredVersion.GroupVersion: storage.k8s.io/v1 Jan 11 16:20:30.269: INFO: Versions found [{storage.k8s.io/v1 v1} {storage.k8s.io/v1beta1 v1beta1}] Jan 11 16:20:30.269: INFO: storage.k8s.io/v1 matches storage.k8s.io/v1 Jan 11 16:20:30.269: INFO: Checking APIGroup: admissionregistration.k8s.io Jan 11 16:20:30.271: INFO: PreferredVersion.GroupVersion: admissionregistration.k8s.io/v1 Jan 11 16:20:30.271: INFO: Versions found [{admissionregistration.k8s.io/v1 v1} {admissionregistration.k8s.io/v1beta1 v1beta1}] Jan 11 16:20:30.271: INFO: admissionregistration.k8s.io/v1 matches admissionregistration.k8s.io/v1 Jan 11 16:20:30.271: INFO: Checking APIGroup: apiextensions.k8s.io Jan 11 16:20:30.272: INFO: PreferredVersion.GroupVersion: apiextensions.k8s.io/v1 Jan 11 16:20:30.272: INFO: Versions found [{apiextensions.k8s.io/v1 v1} {apiextensions.k8s.io/v1beta1 v1beta1}] Jan 11 16:20:30.273: INFO: apiextensions.k8s.io/v1 matches apiextensions.k8s.io/v1 Jan 11 16:20:30.273: INFO: Checking APIGroup: scheduling.k8s.io Jan 11 16:20:30.274: INFO: PreferredVersion.GroupVersion: scheduling.k8s.io/v1 Jan 11 16:20:30.274: INFO: Versions found [{scheduling.k8s.io/v1 v1} {scheduling.k8s.io/v1beta1 v1beta1}] Jan 11 16:20:30.274: INFO: scheduling.k8s.io/v1 matches scheduling.k8s.io/v1 Jan 11 16:20:30.274: INFO: Checking APIGroup: coordination.k8s.io Jan 11 16:20:30.276: INFO: PreferredVersion.GroupVersion: coordination.k8s.io/v1 Jan 11 16:20:30.276: INFO: Versions found [{coordination.k8s.io/v1 v1} {coordination.k8s.io/v1beta1 v1beta1}] Jan 11 16:20:30.276: INFO: coordination.k8s.io/v1 matches coordination.k8s.io/v1 Jan 11 16:20:30.276: INFO: Checking APIGroup: node.k8s.io Jan 11 16:20:30.278: INFO: PreferredVersion.GroupVersion: node.k8s.io/v1 Jan 11 16:20:30.278: INFO: Versions found [{node.k8s.io/v1 v1} {node.k8s.io/v1beta1 v1beta1}] Jan 11 16:20:30.278: INFO: node.k8s.io/v1 matches node.k8s.io/v1 Jan 11 16:20:30.278: INFO: Checking APIGroup: discovery.k8s.io Jan 11 16:20:30.280: INFO: PreferredVersion.GroupVersion: discovery.k8s.io/v1beta1 Jan 11 16:20:30.280: INFO: Versions found 
[{discovery.k8s.io/v1beta1 v1beta1}] Jan 11 16:20:30.280: INFO: discovery.k8s.io/v1beta1 matches discovery.k8s.io/v1beta1 Jan 11 16:20:30.280: INFO: Checking APIGroup: flowcontrol.apiserver.k8s.io Jan 11 16:20:30.281: INFO: PreferredVersion.GroupVersion: flowcontrol.apiserver.k8s.io/v1beta1 Jan 11 16:20:30.282: INFO: Versions found [{flowcontrol.apiserver.k8s.io/v1beta1 v1beta1}] Jan 11 16:20:30.282: INFO: flowcontrol.apiserver.k8s.io/v1beta1 matches flowcontrol.apiserver.k8s.io/v1beta1 Jan 11 16:20:30.282: INFO: Checking APIGroup: pingcap.com Jan 11 16:20:30.283: INFO: PreferredVersion.GroupVersion: pingcap.com/v1alpha1 Jan 11 16:20:30.283: INFO: Versions found [{pingcap.com/v1alpha1 v1alpha1}] Jan 11 16:20:30.283: INFO: pingcap.com/v1alpha1 matches pingcap.com/v1alpha1 [AfterEach] [sig-api-machinery] Discovery /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:20:30.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "discovery-5349" for this suite. • [SLOW TEST:10.780 seconds] [sig-api-machinery] Discovery /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should validate PreferredVersion for each APIGroup [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":309,"completed":26,"skipped":589,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:20:30.298: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should check if Kubernetes control plane services is included in cluster-info [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: validating cluster-info Jan 11 16:20:30.433: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-3014 cluster-info' Jan 11 16:20:31.574: INFO: stderr: "" Jan 11 16:20:31.574: INFO: stdout: "\x1b[0;32mKubernetes control plane\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:34747\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:20:31.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3014" for this suite. 
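The Discovery check earlier in this block (PreferredVersion for each APIGroup, including the aggregated pingcap.com group) can be reproduced with the client-go discovery client. This sketch is an assumed equivalent, not the framework's own helper.

// Sketch (assumed): list API groups and verify each group's preferred
// version is among its advertised versions, as the Discovery test does.
package main

import (
	"fmt"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		panic(err)
	}

	groups, err := dc.ServerGroups()
	if err != nil {
		panic(err)
	}
	for _, g := range groups.Groups {
		preferred := g.PreferredVersion.GroupVersion
		found := false
		for _, v := range g.Versions {
			if v.GroupVersion == preferred {
				found = true
				break
			}
		}
		fmt.Printf("group %q preferred %q listed=%v\n", g.Name, preferred, found)
	}
}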
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info [Conformance]","total":309,"completed":27,"skipped":608,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:20:31.614: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 11 16:20:31.731: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-951d9328-9fc4-42ed-a85f-d978f32cb522" in namespace "security-context-test-3475" to be "Succeeded or Failed" Jan 11 16:20:31.756: INFO: Pod "busybox-privileged-false-951d9328-9fc4-42ed-a85f-d978f32cb522": Phase="Pending", Reason="", readiness=false. Elapsed: 24.270592ms Jan 11 16:20:33.764: INFO: Pod "busybox-privileged-false-951d9328-9fc4-42ed-a85f-d978f32cb522": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032269399s Jan 11 16:20:35.771: INFO: Pod "busybox-privileged-false-951d9328-9fc4-42ed-a85f-d978f32cb522": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.039500686s Jan 11 16:20:35.771: INFO: Pod "busybox-privileged-false-951d9328-9fc4-42ed-a85f-d978f32cb522" satisfied condition "Succeeded or Failed" Jan 11 16:20:35.794: INFO: Got logs for pod "busybox-privileged-false-951d9328-9fc4-42ed-a85f-d978f32cb522": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:20:35.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-3475" for this suite. 
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":28,"skipped":668,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:20:35.811: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jan 11 16:20:38.996: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:20:39.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-3209" for this suite. 
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":309,"completed":29,"skipped":726,"failed":0} ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:20:39.046: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation Jan 11 16:20:39.390: INFO: >>> kubeConfig: /root/.kube/config Jan 11 16:21:02.157: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:22:32.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1710" for this suite. 
• [SLOW TEST:113.779 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":309,"completed":30,"skipped":726,"failed":0} SS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:22:32.827: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Jan 11 16:22:46.985: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9536 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 11 16:22:46.985: INFO: >>> kubeConfig: /root/.kube/config I0111 16:22:47.100799 10 log.go:181] (0xaefc930) (0xaefc9a0) Create stream I0111 16:22:47.101129 10 log.go:181] (0xaefc930) (0xaefc9a0) Stream added, broadcasting: 1 I0111 16:22:47.105399 10 log.go:181] (0xaefc930) Reply frame received for 1 I0111 16:22:47.105679 10 log.go:181] (0xaefc930) (0xaefcb60) Create stream I0111 16:22:47.105827 10 log.go:181] (0xaefc930) (0xaefcb60) Stream added, broadcasting: 3 I0111 16:22:47.107875 10 log.go:181] (0xaefc930) Reply frame received for 3 I0111 16:22:47.108072 10 log.go:181] (0xaefc930) (0xaf7e3f0) Create stream I0111 16:22:47.108165 10 log.go:181] (0xaefc930) (0xaf7e3f0) Stream added, broadcasting: 5 I0111 16:22:47.109883 10 log.go:181] (0xaefc930) Reply frame received for 5 I0111 16:22:47.193956 10 log.go:181] (0xaefc930) Data frame received for 3 I0111 16:22:47.194173 10 log.go:181] (0xaefcb60) (3) Data frame handling I0111 16:22:47.194275 10 log.go:181] (0xaefc930) Data frame received for 5 I0111 16:22:47.194410 10 log.go:181] (0xaf7e3f0) (5) Data frame handling I0111 16:22:47.194537 10 log.go:181] (0xaefcb60) (3) Data frame sent I0111 16:22:47.194706 10 log.go:181] (0xaefc930) Data frame received for 3 I0111 16:22:47.194835 10 log.go:181] (0xaefcb60) (3) Data frame handling I0111 16:22:47.195115 10 log.go:181] (0xaefc930) Data frame received for 1 I0111 16:22:47.195291 10 log.go:181] (0xaefc9a0) (1) Data frame handling I0111 16:22:47.195438 10 log.go:181] (0xaefc9a0) (1) Data frame sent I0111 16:22:47.195584 10 log.go:181] 
(0xaefc930) (0xaefc9a0) Stream removed, broadcasting: 1 I0111 16:22:47.195792 10 log.go:181] (0xaefc930) Go away received I0111 16:22:47.196321 10 log.go:181] (0xaefc930) (0xaefc9a0) Stream removed, broadcasting: 1 I0111 16:22:47.196544 10 log.go:181] (0xaefc930) (0xaefcb60) Stream removed, broadcasting: 3 I0111 16:22:47.196669 10 log.go:181] (0xaefc930) (0xaf7e3f0) Stream removed, broadcasting: 5 Jan 11 16:22:47.196: INFO: Exec stderr: "" Jan 11 16:22:47.197: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9536 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 11 16:22:47.197: INFO: >>> kubeConfig: /root/.kube/config I0111 16:22:47.305907 10 log.go:181] (0xaefcfc0) (0xaefd030) Create stream I0111 16:22:47.306049 10 log.go:181] (0xaefcfc0) (0xaefd030) Stream added, broadcasting: 1 I0111 16:22:47.310077 10 log.go:181] (0xaefcfc0) Reply frame received for 1 I0111 16:22:47.310286 10 log.go:181] (0xaefcfc0) (0xbb1e070) Create stream I0111 16:22:47.310386 10 log.go:181] (0xaefcfc0) (0xbb1e070) Stream added, broadcasting: 3 I0111 16:22:47.312018 10 log.go:181] (0xaefcfc0) Reply frame received for 3 I0111 16:22:47.312181 10 log.go:181] (0xaefcfc0) (0xbb1e230) Create stream I0111 16:22:47.312263 10 log.go:181] (0xaefcfc0) (0xbb1e230) Stream added, broadcasting: 5 I0111 16:22:47.313716 10 log.go:181] (0xaefcfc0) Reply frame received for 5 I0111 16:22:47.375406 10 log.go:181] (0xaefcfc0) Data frame received for 5 I0111 16:22:47.375618 10 log.go:181] (0xbb1e230) (5) Data frame handling I0111 16:22:47.375807 10 log.go:181] (0xaefcfc0) Data frame received for 3 I0111 16:22:47.376006 10 log.go:181] (0xbb1e070) (3) Data frame handling I0111 16:22:47.376176 10 log.go:181] (0xbb1e070) (3) Data frame sent I0111 16:22:47.376330 10 log.go:181] (0xaefcfc0) Data frame received for 3 I0111 16:22:47.376467 10 log.go:181] (0xbb1e070) (3) Data frame handling I0111 16:22:47.376639 10 log.go:181] (0xaefcfc0) Data frame received for 1 I0111 16:22:47.376754 10 log.go:181] (0xaefd030) (1) Data frame handling I0111 16:22:47.377086 10 log.go:181] (0xaefd030) (1) Data frame sent I0111 16:22:47.377233 10 log.go:181] (0xaefcfc0) (0xaefd030) Stream removed, broadcasting: 1 I0111 16:22:47.377415 10 log.go:181] (0xaefcfc0) Go away received I0111 16:22:47.377761 10 log.go:181] (0xaefcfc0) (0xaefd030) Stream removed, broadcasting: 1 I0111 16:22:47.377886 10 log.go:181] (0xaefcfc0) (0xbb1e070) Stream removed, broadcasting: 3 I0111 16:22:47.378061 10 log.go:181] (0xaefcfc0) (0xbb1e230) Stream removed, broadcasting: 5 Jan 11 16:22:47.378: INFO: Exec stderr: "" Jan 11 16:22:47.378: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9536 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 11 16:22:47.378: INFO: >>> kubeConfig: /root/.kube/config I0111 16:22:47.488159 10 log.go:181] (0xaefd570) (0xaefd5e0) Create stream I0111 16:22:47.488344 10 log.go:181] (0xaefd570) (0xaefd5e0) Stream added, broadcasting: 1 I0111 16:22:47.501803 10 log.go:181] (0xaefd570) Reply frame received for 1 I0111 16:22:47.502364 10 log.go:181] (0xaefd570) (0xbc72070) Create stream I0111 16:22:47.502528 10 log.go:181] (0xaefd570) (0xbc72070) Stream added, broadcasting: 3 I0111 16:22:47.505485 10 log.go:181] (0xaefd570) Reply frame received for 3 I0111 16:22:47.505843 10 log.go:181] (0xaefd570) (0xbd04070) Create stream I0111 16:22:47.506059 
10 log.go:181] (0xaefd570) (0xbd04070) Stream added, broadcasting: 5 I0111 16:22:47.507757 10 log.go:181] (0xaefd570) Reply frame received for 5 I0111 16:22:47.567043 10 log.go:181] (0xaefd570) Data frame received for 3 I0111 16:22:47.567244 10 log.go:181] (0xbc72070) (3) Data frame handling I0111 16:22:47.567385 10 log.go:181] (0xaefd570) Data frame received for 5 I0111 16:22:47.567536 10 log.go:181] (0xbd04070) (5) Data frame handling I0111 16:22:47.567690 10 log.go:181] (0xbc72070) (3) Data frame sent I0111 16:22:47.567870 10 log.go:181] (0xaefd570) Data frame received for 3 I0111 16:22:47.567975 10 log.go:181] (0xbc72070) (3) Data frame handling I0111 16:22:47.569085 10 log.go:181] (0xaefd570) Data frame received for 1 I0111 16:22:47.569160 10 log.go:181] (0xaefd5e0) (1) Data frame handling I0111 16:22:47.569241 10 log.go:181] (0xaefd5e0) (1) Data frame sent I0111 16:22:47.569343 10 log.go:181] (0xaefd570) (0xaefd5e0) Stream removed, broadcasting: 1 I0111 16:22:47.569429 10 log.go:181] (0xaefd570) Go away received I0111 16:22:47.570111 10 log.go:181] (0xaefd570) (0xaefd5e0) Stream removed, broadcasting: 1 I0111 16:22:47.570220 10 log.go:181] (0xaefd570) (0xbc72070) Stream removed, broadcasting: 3 I0111 16:22:47.570298 10 log.go:181] (0xaefd570) (0xbd04070) Stream removed, broadcasting: 5 Jan 11 16:22:47.570: INFO: Exec stderr: "" Jan 11 16:22:47.570: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9536 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 11 16:22:47.570: INFO: >>> kubeConfig: /root/.kube/config I0111 16:22:47.669415 10 log.go:181] (0xbd04690) (0xbd04700) Create stream I0111 16:22:47.669524 10 log.go:181] (0xbd04690) (0xbd04700) Stream added, broadcasting: 1 I0111 16:22:47.672469 10 log.go:181] (0xbd04690) Reply frame received for 1 I0111 16:22:47.672672 10 log.go:181] (0xbd04690) (0xaefc150) Create stream I0111 16:22:47.672786 10 log.go:181] (0xbd04690) (0xaefc150) Stream added, broadcasting: 3 I0111 16:22:47.674672 10 log.go:181] (0xbd04690) Reply frame received for 3 I0111 16:22:47.674909 10 log.go:181] (0xbd04690) (0xbd048c0) Create stream I0111 16:22:47.675065 10 log.go:181] (0xbd04690) (0xbd048c0) Stream added, broadcasting: 5 I0111 16:22:47.676818 10 log.go:181] (0xbd04690) Reply frame received for 5 I0111 16:22:47.744704 10 log.go:181] (0xbd04690) Data frame received for 3 I0111 16:22:47.744940 10 log.go:181] (0xaefc150) (3) Data frame handling I0111 16:22:47.745078 10 log.go:181] (0xbd04690) Data frame received for 5 I0111 16:22:47.745318 10 log.go:181] (0xbd048c0) (5) Data frame handling I0111 16:22:47.745550 10 log.go:181] (0xaefc150) (3) Data frame sent I0111 16:22:47.745697 10 log.go:181] (0xbd04690) Data frame received for 3 I0111 16:22:47.745822 10 log.go:181] (0xaefc150) (3) Data frame handling I0111 16:22:47.745941 10 log.go:181] (0xbd04690) Data frame received for 1 I0111 16:22:47.746075 10 log.go:181] (0xbd04700) (1) Data frame handling I0111 16:22:47.746175 10 log.go:181] (0xbd04700) (1) Data frame sent I0111 16:22:47.746276 10 log.go:181] (0xbd04690) (0xbd04700) Stream removed, broadcasting: 1 I0111 16:22:47.746380 10 log.go:181] (0xbd04690) Go away received I0111 16:22:47.746786 10 log.go:181] (0xbd04690) (0xbd04700) Stream removed, broadcasting: 1 I0111 16:22:47.746952 10 log.go:181] (0xbd04690) (0xaefc150) Stream removed, broadcasting: 3 I0111 16:22:47.747069 10 log.go:181] (0xbd04690) (0xbd048c0) Stream removed, broadcasting: 5 Jan 
11 16:22:47.747: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Jan 11 16:22:47.747: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9536 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 11 16:22:47.747: INFO: >>> kubeConfig: /root/.kube/config I0111 16:22:47.853174 10 log.go:181] (0xaefc4d0) (0xaefc5b0) Create stream I0111 16:22:47.853416 10 log.go:181] (0xaefc4d0) (0xaefc5b0) Stream added, broadcasting: 1 I0111 16:22:47.856823 10 log.go:181] (0xaefc4d0) Reply frame received for 1 I0111 16:22:47.857174 10 log.go:181] (0xaefc4d0) (0xaefc770) Create stream I0111 16:22:47.857343 10 log.go:181] (0xaefc4d0) (0xaefc770) Stream added, broadcasting: 3 I0111 16:22:47.859271 10 log.go:181] (0xaefc4d0) Reply frame received for 3 I0111 16:22:47.859522 10 log.go:181] (0xaefc4d0) (0xaefc930) Create stream I0111 16:22:47.859698 10 log.go:181] (0xaefc4d0) (0xaefc930) Stream added, broadcasting: 5 I0111 16:22:47.861570 10 log.go:181] (0xaefc4d0) Reply frame received for 5 I0111 16:22:47.921874 10 log.go:181] (0xaefc4d0) Data frame received for 3 I0111 16:22:47.922124 10 log.go:181] (0xaefc770) (3) Data frame handling I0111 16:22:47.922284 10 log.go:181] (0xaefc4d0) Data frame received for 5 I0111 16:22:47.922390 10 log.go:181] (0xaefc930) (5) Data frame handling I0111 16:22:47.922634 10 log.go:181] (0xaefc770) (3) Data frame sent I0111 16:22:47.922909 10 log.go:181] (0xaefc4d0) Data frame received for 3 I0111 16:22:47.923080 10 log.go:181] (0xaefc4d0) Data frame received for 1 I0111 16:22:47.923247 10 log.go:181] (0xaefc5b0) (1) Data frame handling I0111 16:22:47.923416 10 log.go:181] (0xaefc770) (3) Data frame handling I0111 16:22:47.923603 10 log.go:181] (0xaefc5b0) (1) Data frame sent I0111 16:22:47.923757 10 log.go:181] (0xaefc4d0) (0xaefc5b0) Stream removed, broadcasting: 1 I0111 16:22:47.923940 10 log.go:181] (0xaefc4d0) Go away received I0111 16:22:47.924368 10 log.go:181] (0xaefc4d0) (0xaefc5b0) Stream removed, broadcasting: 1 I0111 16:22:47.924483 10 log.go:181] (0xaefc4d0) (0xaefc770) Stream removed, broadcasting: 3 I0111 16:22:47.924605 10 log.go:181] (0xaefc4d0) (0xaefc930) Stream removed, broadcasting: 5 Jan 11 16:22:47.924: INFO: Exec stderr: "" Jan 11 16:22:47.924: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9536 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 11 16:22:47.925: INFO: >>> kubeConfig: /root/.kube/config I0111 16:22:48.031519 10 log.go:181] (0xaefcee0) (0xaefcf50) Create stream I0111 16:22:48.031635 10 log.go:181] (0xaefcee0) (0xaefcf50) Stream added, broadcasting: 1 I0111 16:22:48.034939 10 log.go:181] (0xaefcee0) Reply frame received for 1 I0111 16:22:48.035061 10 log.go:181] (0xaefcee0) (0xb89c150) Create stream I0111 16:22:48.035128 10 log.go:181] (0xaefcee0) (0xb89c150) Stream added, broadcasting: 3 I0111 16:22:48.036631 10 log.go:181] (0xaefcee0) Reply frame received for 3 I0111 16:22:48.036959 10 log.go:181] (0xaefcee0) (0x6ad1b20) Create stream I0111 16:22:48.037112 10 log.go:181] (0xaefcee0) (0x6ad1b20) Stream added, broadcasting: 5 I0111 16:22:48.038833 10 log.go:181] (0xaefcee0) Reply frame received for 5 I0111 16:22:48.099830 10 log.go:181] (0xaefcee0) Data frame received for 3 I0111 16:22:48.100026 10 log.go:181] (0xaefcee0) Data frame received for 5 
I0111 16:22:48.100273 10 log.go:181] (0x6ad1b20) (5) Data frame handling I0111 16:22:48.100461 10 log.go:181] (0xb89c150) (3) Data frame handling I0111 16:22:48.100634 10 log.go:181] (0xb89c150) (3) Data frame sent I0111 16:22:48.100759 10 log.go:181] (0xaefcee0) Data frame received for 3 I0111 16:22:48.100983 10 log.go:181] (0xaefcee0) Data frame received for 1 I0111 16:22:48.101214 10 log.go:181] (0xaefcf50) (1) Data frame handling I0111 16:22:48.101399 10 log.go:181] (0xb89c150) (3) Data frame handling I0111 16:22:48.101658 10 log.go:181] (0xaefcf50) (1) Data frame sent I0111 16:22:48.101846 10 log.go:181] (0xaefcee0) (0xaefcf50) Stream removed, broadcasting: 1 I0111 16:22:48.102041 10 log.go:181] (0xaefcee0) Go away received I0111 16:22:48.102530 10 log.go:181] (0xaefcee0) (0xaefcf50) Stream removed, broadcasting: 1 I0111 16:22:48.102741 10 log.go:181] (0xaefcee0) (0xb89c150) Stream removed, broadcasting: 3 I0111 16:22:48.102906 10 log.go:181] (0xaefcee0) (0x6ad1b20) Stream removed, broadcasting: 5 Jan 11 16:22:48.102: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Jan 11 16:22:48.103: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9536 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 11 16:22:48.103: INFO: >>> kubeConfig: /root/.kube/config I0111 16:22:48.214925 10 log.go:181] (0x8b3d420) (0x8b3d6c0) Create stream I0111 16:22:48.215131 10 log.go:181] (0x8b3d420) (0x8b3d6c0) Stream added, broadcasting: 1 I0111 16:22:48.219089 10 log.go:181] (0x8b3d420) Reply frame received for 1 I0111 16:22:48.219272 10 log.go:181] (0x8b3d420) (0xaefd2d0) Create stream I0111 16:22:48.219367 10 log.go:181] (0x8b3d420) (0xaefd2d0) Stream added, broadcasting: 3 I0111 16:22:48.220823 10 log.go:181] (0x8b3d420) Reply frame received for 3 I0111 16:22:48.221048 10 log.go:181] (0x8b3d420) (0xb89c4d0) Create stream I0111 16:22:48.221121 10 log.go:181] (0x8b3d420) (0xb89c4d0) Stream added, broadcasting: 5 I0111 16:22:48.222293 10 log.go:181] (0x8b3d420) Reply frame received for 5 I0111 16:22:48.280260 10 log.go:181] (0x8b3d420) Data frame received for 3 I0111 16:22:48.280407 10 log.go:181] (0xaefd2d0) (3) Data frame handling I0111 16:22:48.280566 10 log.go:181] (0xaefd2d0) (3) Data frame sent I0111 16:22:48.280674 10 log.go:181] (0x8b3d420) Data frame received for 3 I0111 16:22:48.280805 10 log.go:181] (0xaefd2d0) (3) Data frame handling I0111 16:22:48.280990 10 log.go:181] (0x8b3d420) Data frame received for 5 I0111 16:22:48.281104 10 log.go:181] (0xb89c4d0) (5) Data frame handling I0111 16:22:48.283723 10 log.go:181] (0x8b3d420) Data frame received for 1 I0111 16:22:48.283824 10 log.go:181] (0x8b3d6c0) (1) Data frame handling I0111 16:22:48.283911 10 log.go:181] (0x8b3d6c0) (1) Data frame sent I0111 16:22:48.284048 10 log.go:181] (0x8b3d420) (0x8b3d6c0) Stream removed, broadcasting: 1 I0111 16:22:48.284178 10 log.go:181] (0x8b3d420) Go away received I0111 16:22:48.284455 10 log.go:181] (0x8b3d420) (0x8b3d6c0) Stream removed, broadcasting: 1 I0111 16:22:48.284574 10 log.go:181] (0x8b3d420) (0xaefd2d0) Stream removed, broadcasting: 3 I0111 16:22:48.284683 10 log.go:181] (0x8b3d420) (0xb89c4d0) Stream removed, broadcasting: 5 Jan 11 16:22:48.284: INFO: Exec stderr: "" Jan 11 16:22:48.284: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9536 PodName:test-host-network-pod 
ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 11 16:22:48.285: INFO: >>> kubeConfig: /root/.kube/config I0111 16:22:48.428496 10 log.go:181] (0xbc72cb0) (0xbc72d20) Create stream I0111 16:22:48.428686 10 log.go:181] (0xbc72cb0) (0xbc72d20) Stream added, broadcasting: 1 I0111 16:22:48.432424 10 log.go:181] (0xbc72cb0) Reply frame received for 1 I0111 16:22:48.432602 10 log.go:181] (0xbc72cb0) (0x7d0ea80) Create stream I0111 16:22:48.432664 10 log.go:181] (0xbc72cb0) (0x7d0ea80) Stream added, broadcasting: 3 I0111 16:22:48.434477 10 log.go:181] (0xbc72cb0) Reply frame received for 3 I0111 16:22:48.434726 10 log.go:181] (0xbc72cb0) (0x7574a10) Create stream I0111 16:22:48.434841 10 log.go:181] (0xbc72cb0) (0x7574a10) Stream added, broadcasting: 5 I0111 16:22:48.436640 10 log.go:181] (0xbc72cb0) Reply frame received for 5 I0111 16:22:48.507384 10 log.go:181] (0xbc72cb0) Data frame received for 3 I0111 16:22:48.507573 10 log.go:181] (0x7d0ea80) (3) Data frame handling I0111 16:22:48.507728 10 log.go:181] (0xbc72cb0) Data frame received for 5 I0111 16:22:48.507970 10 log.go:181] (0x7574a10) (5) Data frame handling I0111 16:22:48.508151 10 log.go:181] (0x7d0ea80) (3) Data frame sent I0111 16:22:48.508329 10 log.go:181] (0xbc72cb0) Data frame received for 3 I0111 16:22:48.508436 10 log.go:181] (0x7d0ea80) (3) Data frame handling I0111 16:22:48.508644 10 log.go:181] (0xbc72cb0) Data frame received for 1 I0111 16:22:48.508810 10 log.go:181] (0xbc72d20) (1) Data frame handling I0111 16:22:48.509049 10 log.go:181] (0xbc72d20) (1) Data frame sent I0111 16:22:48.509216 10 log.go:181] (0xbc72cb0) (0xbc72d20) Stream removed, broadcasting: 1 I0111 16:22:48.509456 10 log.go:181] (0xbc72cb0) Go away received I0111 16:22:48.509911 10 log.go:181] (0xbc72cb0) (0xbc72d20) Stream removed, broadcasting: 1 I0111 16:22:48.510145 10 log.go:181] (0xbc72cb0) (0x7d0ea80) Stream removed, broadcasting: 3 I0111 16:22:48.510296 10 log.go:181] (0xbc72cb0) (0x7574a10) Stream removed, broadcasting: 5 Jan 11 16:22:48.510: INFO: Exec stderr: "" Jan 11 16:22:48.510: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9536 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 11 16:22:48.510: INFO: >>> kubeConfig: /root/.kube/config I0111 16:22:48.619705 10 log.go:181] (0xaefd9d0) (0xaefda40) Create stream I0111 16:22:48.619843 10 log.go:181] (0xaefd9d0) (0xaefda40) Stream added, broadcasting: 1 I0111 16:22:48.625310 10 log.go:181] (0xaefd9d0) Reply frame received for 1 I0111 16:22:48.625479 10 log.go:181] (0xaefd9d0) (0xaefdc00) Create stream I0111 16:22:48.625564 10 log.go:181] (0xaefd9d0) (0xaefdc00) Stream added, broadcasting: 3 I0111 16:22:48.627522 10 log.go:181] (0xaefd9d0) Reply frame received for 3 I0111 16:22:48.627674 10 log.go:181] (0xaefd9d0) (0xbc73030) Create stream I0111 16:22:48.627777 10 log.go:181] (0xaefd9d0) (0xbc73030) Stream added, broadcasting: 5 I0111 16:22:48.629504 10 log.go:181] (0xaefd9d0) Reply frame received for 5 I0111 16:22:48.697734 10 log.go:181] (0xaefd9d0) Data frame received for 5 I0111 16:22:48.697982 10 log.go:181] (0xbc73030) (5) Data frame handling I0111 16:22:48.698129 10 log.go:181] (0xaefd9d0) Data frame received for 3 I0111 16:22:48.698274 10 log.go:181] (0xaefdc00) (3) Data frame handling I0111 16:22:48.698404 10 log.go:181] (0xaefdc00) (3) Data frame sent I0111 16:22:48.698507 10 log.go:181] (0xaefd9d0) 
Data frame received for 3 I0111 16:22:48.698642 10 log.go:181] (0xaefdc00) (3) Data frame handling I0111 16:22:48.699275 10 log.go:181] (0xaefd9d0) Data frame received for 1 I0111 16:22:48.699425 10 log.go:181] (0xaefda40) (1) Data frame handling I0111 16:22:48.699603 10 log.go:181] (0xaefda40) (1) Data frame sent I0111 16:22:48.699748 10 log.go:181] (0xaefd9d0) (0xaefda40) Stream removed, broadcasting: 1 I0111 16:22:48.699942 10 log.go:181] (0xaefd9d0) Go away received I0111 16:22:48.700391 10 log.go:181] (0xaefd9d0) (0xaefda40) Stream removed, broadcasting: 1 I0111 16:22:48.700549 10 log.go:181] (0xaefd9d0) (0xaefdc00) Stream removed, broadcasting: 3 I0111 16:22:48.700675 10 log.go:181] (0xaefd9d0) (0xbc73030) Stream removed, broadcasting: 5 Jan 11 16:22:48.700: INFO: Exec stderr: "" Jan 11 16:22:48.701: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9536 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 11 16:22:48.701: INFO: >>> kubeConfig: /root/.kube/config I0111 16:22:48.813721 10 log.go:181] (0xaefdd50) (0xaefddc0) Create stream I0111 16:22:48.813963 10 log.go:181] (0xaefdd50) (0xaefddc0) Stream added, broadcasting: 1 I0111 16:22:48.818374 10 log.go:181] (0xaefdd50) Reply frame received for 1 I0111 16:22:48.818616 10 log.go:181] (0xaefdd50) (0xaefdf80) Create stream I0111 16:22:48.818702 10 log.go:181] (0xaefdd50) (0xaefdf80) Stream added, broadcasting: 3 I0111 16:22:48.820233 10 log.go:181] (0xaefdd50) Reply frame received for 3 I0111 16:22:48.820417 10 log.go:181] (0xaefdd50) (0xbc738f0) Create stream I0111 16:22:48.820488 10 log.go:181] (0xaefdd50) (0xbc738f0) Stream added, broadcasting: 5 I0111 16:22:48.822041 10 log.go:181] (0xaefdd50) Reply frame received for 5 I0111 16:22:48.883814 10 log.go:181] (0xaefdd50) Data frame received for 3 I0111 16:22:48.884078 10 log.go:181] (0xaefdf80) (3) Data frame handling I0111 16:22:48.884274 10 log.go:181] (0xaefdd50) Data frame received for 5 I0111 16:22:48.884459 10 log.go:181] (0xbc738f0) (5) Data frame handling I0111 16:22:48.884586 10 log.go:181] (0xaefdf80) (3) Data frame sent I0111 16:22:48.884739 10 log.go:181] (0xaefdd50) Data frame received for 3 I0111 16:22:48.884999 10 log.go:181] (0xaefdf80) (3) Data frame handling I0111 16:22:48.885143 10 log.go:181] (0xaefdd50) Data frame received for 1 I0111 16:22:48.885228 10 log.go:181] (0xaefddc0) (1) Data frame handling I0111 16:22:48.885326 10 log.go:181] (0xaefddc0) (1) Data frame sent I0111 16:22:48.885448 10 log.go:181] (0xaefdd50) (0xaefddc0) Stream removed, broadcasting: 1 I0111 16:22:48.885565 10 log.go:181] (0xaefdd50) Go away received I0111 16:22:48.886265 10 log.go:181] (0xaefdd50) (0xaefddc0) Stream removed, broadcasting: 1 I0111 16:22:48.886361 10 log.go:181] (0xaefdd50) (0xaefdf80) Stream removed, broadcasting: 3 I0111 16:22:48.886455 10 log.go:181] (0xaefdd50) (0xbc738f0) Stream removed, broadcasting: 5 Jan 11 16:22:48.886: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:22:48.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-9536" for this suite. 
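Each "ExecWithOptions {Command:[cat /etc/hosts] ...}" entry above, together with the Create stream / Data frame lines that follow it, corresponds to a single exec request against the pod's exec subresource over SPDY. A hedged client-go sketch of that request follows; it is an assumed equivalent of the framework helper, with pod and container names echoed from the log.

// Sketch (assumed): run "cat /etc/hosts" in one container of the test pod
// over the pods/exec subresource, capturing stdout and stderr.
package main

import (
	"bytes"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/remotecommand"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	req := cs.CoreV1().RESTClient().Post().
		Resource("pods").
		Namespace("e2e-kubelet-etc-hosts-9536").
		Name("test-pod").
		SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Container: "busybox-1",
			Command:   []string{"cat", "/etc/hosts"},
			Stdout:    true,
			Stderr:    true,
		}, scheme.ParameterCodec)

	exec, err := remotecommand.NewSPDYExecutor(cfg, "POST", req.URL())
	if err != nil {
		panic(err)
	}
	var stdout, stderr bytes.Buffer
	if err := exec.Stream(remotecommand.StreamOptions{Stdout: &stdout, Stderr: &stderr}); err != nil {
		panic(err)
	}
	fmt.Println(stdout.String())
}

For the hostNetwork=false pod the returned file is the kubelet-managed one, while the container that mounts /etc/hosts itself and the hostNetwork=true pod return unmanaged content, which is what the three verification phases above distinguish.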
• [SLOW TEST:16.163 seconds] [k8s.io] KubeletManagedEtcHosts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":31,"skipped":728,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] Service endpoints latency /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:22:48.993: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 11 16:22:49.307: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-9768 I0111 16:22:49.375554 10 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-9768, replica count: 1 I0111 16:22:50.427459 10 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0111 16:22:51.428432 10 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0111 16:22:52.429075 10 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0111 16:22:53.430708 10 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 11 16:22:53.578: INFO: Created: latency-svc-cks28 Jan 11 16:22:53.595: INFO: Got endpoints: latency-svc-cks28 [60.449232ms] Jan 11 16:22:53.665: INFO: Created: latency-svc-hhm26 Jan 11 16:22:53.677: INFO: Got endpoints: latency-svc-hhm26 [81.060935ms] Jan 11 16:22:53.694: INFO: Created: latency-svc-s7stq Jan 11 16:22:53.719: INFO: Got endpoints: latency-svc-s7stq [123.120208ms] Jan 11 16:22:53.762: INFO: Created: latency-svc-ct6k2 Jan 11 16:22:53.811: INFO: Created: latency-svc-9vm9w Jan 11 16:22:53.812: INFO: Got endpoints: latency-svc-ct6k2 [216.022096ms] Jan 11 16:22:53.883: INFO: Got endpoints: latency-svc-9vm9w [284.647371ms] Jan 11 16:22:53.892: INFO: Created: latency-svc-qwlsp Jan 11 16:22:53.931: INFO: Got endpoints: latency-svc-qwlsp [333.594082ms] Jan 11 16:22:54.050: INFO: Created: latency-svc-d4rxd Jan 11 16:22:54.073: INFO: Got endpoints: latency-svc-d4rxd [475.334857ms] Jan 11 16:22:54.073: INFO: Created: latency-svc-jlmkq Jan 11 16:22:54.170: INFO: Got endpoints: latency-svc-jlmkq [572.091518ms] Jan 11 16:22:54.194: INFO: Created: latency-svc-mxb4g Jan 11 16:22:54.223: INFO: Got endpoints: latency-svc-mxb4g [625.461865ms] Jan 11 
16:22:54.241: INFO: Created: latency-svc-865zn Jan 11 16:22:54.324: INFO: Got endpoints: latency-svc-865zn [726.220852ms] Jan 11 16:22:54.325: INFO: Created: latency-svc-mrd46 Jan 11 16:22:54.330: INFO: Got endpoints: latency-svc-mrd46 [732.014069ms] Jan 11 16:22:54.360: INFO: Created: latency-svc-cxxqp Jan 11 16:22:54.373: INFO: Got endpoints: latency-svc-cxxqp [775.009384ms] Jan 11 16:22:54.415: INFO: Created: latency-svc-czz85 Jan 11 16:22:54.443: INFO: Got endpoints: latency-svc-czz85 [846.800992ms] Jan 11 16:22:54.457: INFO: Created: latency-svc-fkf7j Jan 11 16:22:54.488: INFO: Got endpoints: latency-svc-fkf7j [889.808183ms] Jan 11 16:22:54.518: INFO: Created: latency-svc-pj9c2 Jan 11 16:22:54.529: INFO: Got endpoints: latency-svc-pj9c2 [930.901773ms] Jan 11 16:22:54.587: INFO: Created: latency-svc-cc8q7 Jan 11 16:22:54.626: INFO: Got endpoints: latency-svc-cc8q7 [1.02846039s] Jan 11 16:22:54.722: INFO: Created: latency-svc-8shjp Jan 11 16:22:54.733: INFO: Got endpoints: latency-svc-8shjp [1.055687749s] Jan 11 16:22:54.890: INFO: Created: latency-svc-z7fz2 Jan 11 16:22:54.894: INFO: Got endpoints: latency-svc-z7fz2 [1.174662985s] Jan 11 16:22:54.954: INFO: Created: latency-svc-j5pnn Jan 11 16:22:54.960: INFO: Got endpoints: latency-svc-j5pnn [1.147655924s] Jan 11 16:22:55.026: INFO: Created: latency-svc-jb7sq Jan 11 16:22:55.039: INFO: Got endpoints: latency-svc-jb7sq [1.155860659s] Jan 11 16:22:55.061: INFO: Created: latency-svc-fdzqw Jan 11 16:22:55.081: INFO: Got endpoints: latency-svc-fdzqw [1.149977971s] Jan 11 16:22:55.163: INFO: Created: latency-svc-7sz9m Jan 11 16:22:55.183: INFO: Created: latency-svc-sspvg Jan 11 16:22:55.184: INFO: Got endpoints: latency-svc-7sz9m [1.110858478s] Jan 11 16:22:55.215: INFO: Got endpoints: latency-svc-sspvg [1.045045698s] Jan 11 16:22:55.255: INFO: Created: latency-svc-2glcc Jan 11 16:22:55.306: INFO: Got endpoints: latency-svc-2glcc [1.082454696s] Jan 11 16:22:55.332: INFO: Created: latency-svc-w2dll Jan 11 16:22:55.356: INFO: Got endpoints: latency-svc-w2dll [1.031315455s] Jan 11 16:22:55.375: INFO: Created: latency-svc-t4zkl Jan 11 16:22:55.392: INFO: Got endpoints: latency-svc-t4zkl [1.061688349s] Jan 11 16:22:55.455: INFO: Created: latency-svc-fxxbz Jan 11 16:22:55.478: INFO: Got endpoints: latency-svc-fxxbz [1.104257082s] Jan 11 16:22:55.478: INFO: Created: latency-svc-66lfv Jan 11 16:22:55.499: INFO: Got endpoints: latency-svc-66lfv [1.056062254s] Jan 11 16:22:55.541: INFO: Created: latency-svc-vv7lm Jan 11 16:22:55.606: INFO: Got endpoints: latency-svc-vv7lm [1.118041955s] Jan 11 16:22:55.645: INFO: Created: latency-svc-p8qbl Jan 11 16:22:55.669: INFO: Got endpoints: latency-svc-p8qbl [1.139379034s] Jan 11 16:22:55.687: INFO: Created: latency-svc-58ksf Jan 11 16:22:55.698: INFO: Got endpoints: latency-svc-58ksf [1.072570448s] Jan 11 16:22:55.753: INFO: Created: latency-svc-46bj6 Jan 11 16:22:55.764: INFO: Got endpoints: latency-svc-46bj6 [1.030193218s] Jan 11 16:22:55.793: INFO: Created: latency-svc-vwx62 Jan 11 16:22:55.805: INFO: Got endpoints: latency-svc-vwx62 [910.565723ms] Jan 11 16:22:55.876: INFO: Created: latency-svc-7f7ms Jan 11 16:22:55.882: INFO: Got endpoints: latency-svc-7f7ms [921.777345ms] Jan 11 16:22:55.909: INFO: Created: latency-svc-zbp2w Jan 11 16:22:55.931: INFO: Got endpoints: latency-svc-zbp2w [891.677747ms] Jan 11 16:22:55.968: INFO: Created: latency-svc-ffhcx Jan 11 16:22:56.018: INFO: Got endpoints: latency-svc-ffhcx [936.930576ms] Jan 11 16:22:56.060: INFO: Created: latency-svc-b7vkr Jan 11 16:22:56.089: 
INFO: Got endpoints: latency-svc-b7vkr [904.836238ms] Jan 11 16:22:56.144: INFO: Created: latency-svc-2wkjh Jan 11 16:22:56.158: INFO: Got endpoints: latency-svc-2wkjh [942.461791ms] Jan 11 16:22:56.203: INFO: Created: latency-svc-dwlp2 Jan 11 16:22:56.218: INFO: Got endpoints: latency-svc-dwlp2 [912.204012ms] Jan 11 16:22:56.286: INFO: Created: latency-svc-p8kxj Jan 11 16:22:56.302: INFO: Got endpoints: latency-svc-p8kxj [945.741369ms] Jan 11 16:22:56.359: INFO: Created: latency-svc-z8jdz Jan 11 16:22:56.396: INFO: Got endpoints: latency-svc-z8jdz [1.004100271s] Jan 11 16:22:56.430: INFO: Created: latency-svc-wbdwf Jan 11 16:22:56.441: INFO: Got endpoints: latency-svc-wbdwf [962.879512ms] Jan 11 16:22:56.491: INFO: Created: latency-svc-nbvl6 Jan 11 16:22:56.510: INFO: Got endpoints: latency-svc-nbvl6 [1.011327206s] Jan 11 16:22:56.527: INFO: Created: latency-svc-llt8n Jan 11 16:22:56.535: INFO: Got endpoints: latency-svc-llt8n [928.354234ms] Jan 11 16:22:56.556: INFO: Created: latency-svc-r9vvs Jan 11 16:22:56.582: INFO: Got endpoints: latency-svc-r9vvs [912.989589ms] Jan 11 16:22:56.642: INFO: Created: latency-svc-sdfrg Jan 11 16:22:56.665: INFO: Got endpoints: latency-svc-sdfrg [965.758193ms] Jan 11 16:22:56.695: INFO: Created: latency-svc-prjbd Jan 11 16:22:56.708: INFO: Got endpoints: latency-svc-prjbd [944.473912ms] Jan 11 16:22:56.775: INFO: Created: latency-svc-bsrhz Jan 11 16:22:56.781: INFO: Got endpoints: latency-svc-bsrhz [975.957856ms] Jan 11 16:22:56.801: INFO: Created: latency-svc-q9gnh Jan 11 16:22:56.818: INFO: Got endpoints: latency-svc-q9gnh [935.981606ms] Jan 11 16:22:56.912: INFO: Created: latency-svc-brwr2 Jan 11 16:22:56.920: INFO: Got endpoints: latency-svc-brwr2 [988.222322ms] Jan 11 16:22:56.953: INFO: Created: latency-svc-7csbm Jan 11 16:22:56.967: INFO: Got endpoints: latency-svc-7csbm [948.16783ms] Jan 11 16:22:57.055: INFO: Created: latency-svc-86tkz Jan 11 16:22:57.071: INFO: Got endpoints: latency-svc-86tkz [981.972499ms] Jan 11 16:22:57.109: INFO: Created: latency-svc-6t5wm Jan 11 16:22:57.117: INFO: Got endpoints: latency-svc-6t5wm [959.026812ms] Jan 11 16:22:57.133: INFO: Created: latency-svc-mgjpx Jan 11 16:22:57.188: INFO: Got endpoints: latency-svc-mgjpx [969.068308ms] Jan 11 16:22:57.204: INFO: Created: latency-svc-psts7 Jan 11 16:22:57.213: INFO: Got endpoints: latency-svc-psts7 [910.533946ms] Jan 11 16:22:57.228: INFO: Created: latency-svc-5dsd2 Jan 11 16:22:57.237: INFO: Got endpoints: latency-svc-5dsd2 [840.265673ms] Jan 11 16:22:57.276: INFO: Created: latency-svc-ztzc5 Jan 11 16:22:57.343: INFO: Got endpoints: latency-svc-ztzc5 [902.159091ms] Jan 11 16:22:57.373: INFO: Created: latency-svc-sr42j Jan 11 16:22:57.387: INFO: Got endpoints: latency-svc-sr42j [876.032166ms] Jan 11 16:22:57.413: INFO: Created: latency-svc-pkhtn Jan 11 16:22:57.481: INFO: Got endpoints: latency-svc-pkhtn [945.797507ms] Jan 11 16:22:57.503: INFO: Created: latency-svc-s2p94 Jan 11 16:22:57.519: INFO: Got endpoints: latency-svc-s2p94 [936.588093ms] Jan 11 16:22:57.565: INFO: Created: latency-svc-trwmm Jan 11 16:22:57.578: INFO: Got endpoints: latency-svc-trwmm [912.868285ms] Jan 11 16:22:57.653: INFO: Created: latency-svc-mcl79 Jan 11 16:22:57.681: INFO: Got endpoints: latency-svc-mcl79 [972.313994ms] Jan 11 16:22:57.769: INFO: Created: latency-svc-v2z29 Jan 11 16:22:57.787: INFO: Created: latency-svc-86tmf Jan 11 16:22:57.788: INFO: Got endpoints: latency-svc-v2z29 [1.007001019s] Jan 11 16:22:57.824: INFO: Got endpoints: latency-svc-86tmf [1.006051326s] Jan 11 
16:22:57.893: INFO: Created: latency-svc-mw78v Jan 11 16:22:57.931: INFO: Created: latency-svc-56vfn Jan 11 16:22:57.932: INFO: Got endpoints: latency-svc-mw78v [1.012403552s] Jan 11 16:22:57.949: INFO: Got endpoints: latency-svc-56vfn [982.545883ms] Jan 11 16:22:58.031: INFO: Created: latency-svc-mkfzf Jan 11 16:22:58.039: INFO: Got endpoints: latency-svc-mkfzf [967.69797ms] Jan 11 16:22:58.085: INFO: Created: latency-svc-dwtr9 Jan 11 16:22:58.099: INFO: Got endpoints: latency-svc-dwtr9 [981.963249ms] Jan 11 16:22:58.193: INFO: Created: latency-svc-ddx5v Jan 11 16:22:58.206: INFO: Got endpoints: latency-svc-ddx5v [1.017748832s] Jan 11 16:22:58.237: INFO: Created: latency-svc-m8ktl Jan 11 16:22:58.249: INFO: Got endpoints: latency-svc-m8ktl [1.036635223s] Jan 11 16:22:58.267: INFO: Created: latency-svc-q7ww9 Jan 11 16:22:58.278: INFO: Got endpoints: latency-svc-q7ww9 [1.040987809s] Jan 11 16:22:58.324: INFO: Created: latency-svc-5clsv Jan 11 16:22:58.356: INFO: Got endpoints: latency-svc-5clsv [1.012432523s] Jan 11 16:22:58.357: INFO: Created: latency-svc-mk7fr Jan 11 16:22:58.399: INFO: Got endpoints: latency-svc-mk7fr [1.011517682s] Jan 11 16:22:58.463: INFO: Created: latency-svc-8qlrg Jan 11 16:22:58.482: INFO: Created: latency-svc-znxrd Jan 11 16:22:58.482: INFO: Got endpoints: latency-svc-8qlrg [1.00148216s] Jan 11 16:22:58.507: INFO: Got endpoints: latency-svc-znxrd [987.312299ms] Jan 11 16:22:58.529: INFO: Created: latency-svc-jb6pt Jan 11 16:22:58.544: INFO: Got endpoints: latency-svc-jb6pt [965.859007ms] Jan 11 16:22:58.558: INFO: Created: latency-svc-dpq77 Jan 11 16:22:58.589: INFO: Got endpoints: latency-svc-dpq77 [907.695968ms] Jan 11 16:22:58.601: INFO: Created: latency-svc-swxks Jan 11 16:22:58.643: INFO: Got endpoints: latency-svc-swxks [855.127901ms] Jan 11 16:22:58.667: INFO: Created: latency-svc-tkdbr Jan 11 16:22:58.686: INFO: Got endpoints: latency-svc-tkdbr [861.716682ms] Jan 11 16:22:58.750: INFO: Created: latency-svc-5g7kq Jan 11 16:22:58.764: INFO: Created: latency-svc-ssbvd Jan 11 16:22:58.764: INFO: Got endpoints: latency-svc-5g7kq [831.781686ms] Jan 11 16:22:58.799: INFO: Got endpoints: latency-svc-ssbvd [849.501677ms] Jan 11 16:22:58.829: INFO: Created: latency-svc-sdcg9 Jan 11 16:22:58.896: INFO: Got endpoints: latency-svc-sdcg9 [856.989589ms] Jan 11 16:22:59.275: INFO: Created: latency-svc-67v4k Jan 11 16:22:59.433: INFO: Got endpoints: latency-svc-67v4k [1.334189406s] Jan 11 16:22:59.622: INFO: Created: latency-svc-nzwzx Jan 11 16:22:59.633: INFO: Got endpoints: latency-svc-nzwzx [1.426782624s] Jan 11 16:22:59.712: INFO: Created: latency-svc-tc6tk Jan 11 16:22:59.742: INFO: Got endpoints: latency-svc-tc6tk [1.49255396s] Jan 11 16:22:59.977: INFO: Created: latency-svc-5w87d Jan 11 16:23:00.030: INFO: Got endpoints: latency-svc-5w87d [1.75119495s] Jan 11 16:23:00.126: INFO: Created: latency-svc-7bj5m Jan 11 16:23:00.181: INFO: Got endpoints: latency-svc-7bj5m [438.223105ms] Jan 11 16:23:00.651: INFO: Created: latency-svc-9zpdr Jan 11 16:23:00.715: INFO: Got endpoints: latency-svc-9zpdr [2.359026709s] Jan 11 16:23:00.794: INFO: Created: latency-svc-hgfld Jan 11 16:23:00.808: INFO: Got endpoints: latency-svc-hgfld [2.408809204s] Jan 11 16:23:01.183: INFO: Created: latency-svc-fcg86 Jan 11 16:23:01.410: INFO: Got endpoints: latency-svc-fcg86 [2.927719609s] Jan 11 16:23:01.483: INFO: Created: latency-svc-pz52m Jan 11 16:23:01.515: INFO: Got endpoints: latency-svc-pz52m [3.0081861s] Jan 11 16:23:01.515: INFO: Created: latency-svc-rjq5x Jan 11 16:23:01.531: INFO: Got 
endpoints: latency-svc-rjq5x [2.986409029s] Jan 11 16:23:01.624: INFO: Created: latency-svc-8wlxg Jan 11 16:23:01.639: INFO: Got endpoints: latency-svc-8wlxg [3.049508181s] Jan 11 16:23:01.662: INFO: Created: latency-svc-k8ffl Jan 11 16:23:01.683: INFO: Got endpoints: latency-svc-k8ffl [3.039091017s] Jan 11 16:23:01.701: INFO: Created: latency-svc-nd2bt Jan 11 16:23:01.716: INFO: Got endpoints: latency-svc-nd2bt [3.029549337s] Jan 11 16:23:01.778: INFO: Created: latency-svc-cvvm5 Jan 11 16:23:01.790: INFO: Got endpoints: latency-svc-cvvm5 [3.02480383s] Jan 11 16:23:01.837: INFO: Created: latency-svc-2w596 Jan 11 16:23:01.899: INFO: Got endpoints: latency-svc-2w596 [3.099334413s] Jan 11 16:23:01.900: INFO: Created: latency-svc-fl6nn Jan 11 16:23:01.921: INFO: Got endpoints: latency-svc-fl6nn [3.025008383s] Jan 11 16:23:01.952: INFO: Created: latency-svc-vv26s Jan 11 16:23:01.981: INFO: Got endpoints: latency-svc-vv26s [2.546813644s] Jan 11 16:23:02.032: INFO: Created: latency-svc-78ltq Jan 11 16:23:02.067: INFO: Got endpoints: latency-svc-78ltq [2.433553825s] Jan 11 16:23:02.095: INFO: Created: latency-svc-m625m Jan 11 16:23:02.113: INFO: Got endpoints: latency-svc-m625m [2.083211278s] Jan 11 16:23:02.169: INFO: Created: latency-svc-j9vkm Jan 11 16:23:02.204: INFO: Created: latency-svc-47h86 Jan 11 16:23:02.205: INFO: Got endpoints: latency-svc-j9vkm [2.024107964s] Jan 11 16:23:02.253: INFO: Got endpoints: latency-svc-47h86 [1.537562052s] Jan 11 16:23:02.312: INFO: Created: latency-svc-pddx5 Jan 11 16:23:02.323: INFO: Got endpoints: latency-svc-pddx5 [1.515336726s] Jan 11 16:23:02.340: INFO: Created: latency-svc-7hbxr Jan 11 16:23:02.365: INFO: Got endpoints: latency-svc-7hbxr [954.509641ms] Jan 11 16:23:02.396: INFO: Created: latency-svc-n6c58 Jan 11 16:23:02.438: INFO: Got endpoints: latency-svc-n6c58 [923.014191ms] Jan 11 16:23:02.440: INFO: Created: latency-svc-zbpk5 Jan 11 16:23:02.460: INFO: Got endpoints: latency-svc-zbpk5 [929.554861ms] Jan 11 16:23:02.461: INFO: Created: latency-svc-zscg7 Jan 11 16:23:02.491: INFO: Got endpoints: latency-svc-zscg7 [852.478171ms] Jan 11 16:23:02.521: INFO: Created: latency-svc-q45g5 Jan 11 16:23:02.571: INFO: Got endpoints: latency-svc-q45g5 [887.615245ms] Jan 11 16:23:02.582: INFO: Created: latency-svc-mpck6 Jan 11 16:23:02.597: INFO: Got endpoints: latency-svc-mpck6 [880.759155ms] Jan 11 16:23:02.617: INFO: Created: latency-svc-pkbqk Jan 11 16:23:02.634: INFO: Got endpoints: latency-svc-pkbqk [843.781268ms] Jan 11 16:23:02.654: INFO: Created: latency-svc-rrw6f Jan 11 16:23:02.709: INFO: Got endpoints: latency-svc-rrw6f [809.905024ms] Jan 11 16:23:02.710: INFO: Created: latency-svc-j8964 Jan 11 16:23:02.716: INFO: Got endpoints: latency-svc-j8964 [794.655747ms] Jan 11 16:23:02.742: INFO: Created: latency-svc-m4jcp Jan 11 16:23:02.766: INFO: Got endpoints: latency-svc-m4jcp [784.816744ms] Jan 11 16:23:02.798: INFO: Created: latency-svc-rsjvk Jan 11 16:23:02.833: INFO: Got endpoints: latency-svc-rsjvk [765.734736ms] Jan 11 16:23:02.852: INFO: Created: latency-svc-mwdx5 Jan 11 16:23:02.863: INFO: Got endpoints: latency-svc-mwdx5 [750.082683ms] Jan 11 16:23:02.875: INFO: Created: latency-svc-8nmql Jan 11 16:23:02.885: INFO: Got endpoints: latency-svc-8nmql [680.141707ms] Jan 11 16:23:02.899: INFO: Created: latency-svc-sd895 Jan 11 16:23:02.910: INFO: Got endpoints: latency-svc-sd895 [656.579355ms] Jan 11 16:23:02.929: INFO: Created: latency-svc-28l68 Jan 11 16:23:02.966: INFO: Got endpoints: latency-svc-28l68 [641.987619ms] Jan 11 16:23:02.988: INFO: 
Created: latency-svc-zx8k4 Jan 11 16:23:03.005: INFO: Got endpoints: latency-svc-zx8k4 [640.005094ms] Jan 11 16:23:03.045: INFO: Created: latency-svc-jxhj7 Jan 11 16:23:03.146: INFO: Got endpoints: latency-svc-jxhj7 [707.212194ms] Jan 11 16:23:03.154: INFO: Created: latency-svc-6wk8s Jan 11 16:23:03.192: INFO: Got endpoints: latency-svc-6wk8s [731.073167ms] Jan 11 16:23:03.314: INFO: Created: latency-svc-x2pz2 Jan 11 16:23:03.343: INFO: Created: latency-svc-z9xmh Jan 11 16:23:03.343: INFO: Got endpoints: latency-svc-x2pz2 [851.617971ms] Jan 11 16:23:03.359: INFO: Got endpoints: latency-svc-z9xmh [787.952578ms] Jan 11 16:23:03.379: INFO: Created: latency-svc-pt24q Jan 11 16:23:03.395: INFO: Got endpoints: latency-svc-pt24q [798.235214ms] Jan 11 16:23:03.456: INFO: Created: latency-svc-q5xzk Jan 11 16:23:03.476: INFO: Got endpoints: latency-svc-q5xzk [841.746245ms] Jan 11 16:23:03.477: INFO: Created: latency-svc-cvbjf Jan 11 16:23:03.512: INFO: Got endpoints: latency-svc-cvbjf [802.665814ms] Jan 11 16:23:03.547: INFO: Created: latency-svc-kc65w Jan 11 16:23:03.612: INFO: Got endpoints: latency-svc-kc65w [895.579104ms] Jan 11 16:23:03.644: INFO: Created: latency-svc-jv4dq Jan 11 16:23:03.680: INFO: Got endpoints: latency-svc-jv4dq [913.690539ms] Jan 11 16:23:03.749: INFO: Created: latency-svc-kzd9p Jan 11 16:23:03.763: INFO: Got endpoints: latency-svc-kzd9p [930.048794ms] Jan 11 16:23:03.800: INFO: Created: latency-svc-gj6xx Jan 11 16:23:03.815: INFO: Got endpoints: latency-svc-gj6xx [951.878571ms] Jan 11 16:23:03.848: INFO: Created: latency-svc-kl8lg Jan 11 16:23:03.931: INFO: Got endpoints: latency-svc-kl8lg [1.045679123s] Jan 11 16:23:03.931: INFO: Created: latency-svc-9w87m Jan 11 16:23:03.940: INFO: Got endpoints: latency-svc-9w87m [1.029683793s] Jan 11 16:23:03.960: INFO: Created: latency-svc-cpvnz Jan 11 16:23:04.003: INFO: Got endpoints: latency-svc-cpvnz [1.037518848s] Jan 11 16:23:04.027: INFO: Created: latency-svc-5jwp8 Jan 11 16:23:04.068: INFO: Got endpoints: latency-svc-5jwp8 [1.062654113s] Jan 11 16:23:04.093: INFO: Created: latency-svc-87ttn Jan 11 16:23:04.107: INFO: Got endpoints: latency-svc-87ttn [961.19817ms] Jan 11 16:23:04.135: INFO: Created: latency-svc-6sdbl Jan 11 16:23:04.149: INFO: Got endpoints: latency-svc-6sdbl [957.378381ms] Jan 11 16:23:04.236: INFO: Created: latency-svc-cbn2b Jan 11 16:23:04.276: INFO: Created: latency-svc-mlczx Jan 11 16:23:04.276: INFO: Got endpoints: latency-svc-cbn2b [932.942625ms] Jan 11 16:23:04.298: INFO: Got endpoints: latency-svc-mlczx [939.305615ms] Jan 11 16:23:04.332: INFO: Created: latency-svc-6tkt2 Jan 11 16:23:04.373: INFO: Got endpoints: latency-svc-6tkt2 [977.08753ms] Jan 11 16:23:04.387: INFO: Created: latency-svc-k42sc Jan 11 16:23:04.402: INFO: Got endpoints: latency-svc-k42sc [925.786259ms] Jan 11 16:23:04.422: INFO: Created: latency-svc-xhdvz Jan 11 16:23:04.444: INFO: Got endpoints: latency-svc-xhdvz [931.999078ms] Jan 11 16:23:04.535: INFO: Created: latency-svc-wz6rd Jan 11 16:23:04.537: INFO: Created: latency-svc-jmrrz Jan 11 16:23:04.544: INFO: Got endpoints: latency-svc-wz6rd [931.050376ms] Jan 11 16:23:04.545: INFO: Got endpoints: latency-svc-jmrrz [864.811911ms] Jan 11 16:23:04.560: INFO: Created: latency-svc-fcfkb Jan 11 16:23:04.578: INFO: Got endpoints: latency-svc-fcfkb [814.515602ms] Jan 11 16:23:04.599: INFO: Created: latency-svc-v9rcp Jan 11 16:23:04.616: INFO: Got endpoints: latency-svc-v9rcp [800.245932ms] Jan 11 16:23:04.713: INFO: Created: latency-svc-gdw2l Jan 11 16:23:04.771: INFO: Created: 
latency-svc-pstlf Jan 11 16:23:04.771: INFO: Got endpoints: latency-svc-gdw2l [839.711371ms] Jan 11 16:23:04.804: INFO: Got endpoints: latency-svc-pstlf [863.716943ms] Jan 11 16:23:04.845: INFO: Created: latency-svc-7vm5r Jan 11 16:23:04.879: INFO: Got endpoints: latency-svc-7vm5r [875.703644ms] Jan 11 16:23:04.922: INFO: Created: latency-svc-q6c79 Jan 11 16:23:04.995: INFO: Got endpoints: latency-svc-q6c79 [926.128453ms] Jan 11 16:23:05.049: INFO: Created: latency-svc-q6nbs Jan 11 16:23:05.079: INFO: Got endpoints: latency-svc-q6nbs [971.527428ms] Jan 11 16:23:05.199: INFO: Created: latency-svc-hlcrv Jan 11 16:23:05.223: INFO: Got endpoints: latency-svc-hlcrv [1.073522369s] Jan 11 16:23:05.270: INFO: Created: latency-svc-7rbqq Jan 11 16:23:05.342: INFO: Got endpoints: latency-svc-7rbqq [1.065541212s] Jan 11 16:23:05.344: INFO: Created: latency-svc-s746m Jan 11 16:23:05.415: INFO: Got endpoints: latency-svc-s746m [1.116139782s] Jan 11 16:23:05.485: INFO: Created: latency-svc-krgwb Jan 11 16:23:05.515: INFO: Got endpoints: latency-svc-krgwb [1.141637596s] Jan 11 16:23:05.518: INFO: Created: latency-svc-ssmdp Jan 11 16:23:05.547: INFO: Got endpoints: latency-svc-ssmdp [1.144654377s] Jan 11 16:23:05.570: INFO: Created: latency-svc-wdw9k Jan 11 16:23:05.582: INFO: Got endpoints: latency-svc-wdw9k [1.137788033s] Jan 11 16:23:05.666: INFO: Created: latency-svc-tl7c6 Jan 11 16:23:05.688: INFO: Got endpoints: latency-svc-tl7c6 [1.144008018s] Jan 11 16:23:05.827: INFO: Created: latency-svc-k4phm Jan 11 16:23:05.853: INFO: Got endpoints: latency-svc-k4phm [1.308121837s] Jan 11 16:23:05.857: INFO: Created: latency-svc-45cc5 Jan 11 16:23:05.876: INFO: Got endpoints: latency-svc-45cc5 [1.298595798s] Jan 11 16:23:05.906: INFO: Created: latency-svc-gfnzj Jan 11 16:23:05.917: INFO: Got endpoints: latency-svc-gfnzj [1.300865921s] Jan 11 16:23:05.959: INFO: Created: latency-svc-wktsb Jan 11 16:23:05.966: INFO: Got endpoints: latency-svc-wktsb [1.194201224s] Jan 11 16:23:05.995: INFO: Created: latency-svc-7mxj9 Jan 11 16:23:06.019: INFO: Got endpoints: latency-svc-7mxj9 [1.215176111s] Jan 11 16:23:06.039: INFO: Created: latency-svc-2dl2q Jan 11 16:23:06.056: INFO: Got endpoints: latency-svc-2dl2q [1.17608821s] Jan 11 16:23:06.097: INFO: Created: latency-svc-4dk79 Jan 11 16:23:06.102: INFO: Got endpoints: latency-svc-4dk79 [1.106793396s] Jan 11 16:23:06.122: INFO: Created: latency-svc-hzkr5 Jan 11 16:23:06.153: INFO: Got endpoints: latency-svc-hzkr5 [1.073728064s] Jan 11 16:23:06.193: INFO: Created: latency-svc-4254b Jan 11 16:23:06.221: INFO: Got endpoints: latency-svc-4254b [997.935437ms] Jan 11 16:23:06.241: INFO: Created: latency-svc-klkg5 Jan 11 16:23:06.252: INFO: Got endpoints: latency-svc-klkg5 [909.065228ms] Jan 11 16:23:06.302: INFO: Created: latency-svc-4c57k Jan 11 16:23:06.317: INFO: Got endpoints: latency-svc-4c57k [901.580505ms] Jan 11 16:23:06.344: INFO: Created: latency-svc-l4s85 Jan 11 16:23:06.361: INFO: Got endpoints: latency-svc-l4s85 [846.018437ms] Jan 11 16:23:06.391: INFO: Created: latency-svc-flqd6 Jan 11 16:23:06.422: INFO: Got endpoints: latency-svc-flqd6 [875.309529ms] Jan 11 16:23:06.480: INFO: Created: latency-svc-txbtt Jan 11 16:23:06.487: INFO: Got endpoints: latency-svc-txbtt [904.162796ms] Jan 11 16:23:06.512: INFO: Created: latency-svc-m2nlm Jan 11 16:23:06.535: INFO: Got endpoints: latency-svc-m2nlm [846.642046ms] Jan 11 16:23:06.548: INFO: Created: latency-svc-85x87 Jan 11 16:23:06.559: INFO: Got endpoints: latency-svc-85x87 [705.825753ms] Jan 11 16:23:06.572: INFO: 
Created: latency-svc-wr6hx Jan 11 16:23:06.629: INFO: Got endpoints: latency-svc-wr6hx [752.13031ms] Jan 11 16:23:06.650: INFO: Created: latency-svc-v689p Jan 11 16:23:06.665: INFO: Got endpoints: latency-svc-v689p [747.837745ms] Jan 11 16:23:06.686: INFO: Created: latency-svc-lnbbj Jan 11 16:23:06.695: INFO: Got endpoints: latency-svc-lnbbj [729.013411ms] Jan 11 16:23:06.709: INFO: Created: latency-svc-srkvn Jan 11 16:23:06.718: INFO: Got endpoints: latency-svc-srkvn [698.650256ms] Jan 11 16:23:06.778: INFO: Created: latency-svc-m8nzj Jan 11 16:23:06.809: INFO: Got endpoints: latency-svc-m8nzj [752.854868ms] Jan 11 16:23:06.813: INFO: Created: latency-svc-r95qf Jan 11 16:23:06.842: INFO: Got endpoints: latency-svc-r95qf [739.925985ms] Jan 11 16:23:06.877: INFO: Created: latency-svc-rwxdr Jan 11 16:23:06.906: INFO: Got endpoints: latency-svc-rwxdr [752.628685ms] Jan 11 16:23:06.925: INFO: Created: latency-svc-rnmv4 Jan 11 16:23:06.935: INFO: Got endpoints: latency-svc-rnmv4 [713.175323ms] Jan 11 16:23:06.987: INFO: Created: latency-svc-4xjw6 Jan 11 16:23:07.001: INFO: Got endpoints: latency-svc-4xjw6 [749.557001ms] Jan 11 16:23:07.043: INFO: Created: latency-svc-hddzd Jan 11 16:23:07.071: INFO: Got endpoints: latency-svc-hddzd [753.991392ms] Jan 11 16:23:07.181: INFO: Created: latency-svc-nrgdp Jan 11 16:23:07.455: INFO: Got endpoints: latency-svc-nrgdp [1.09352816s] Jan 11 16:23:07.496: INFO: Created: latency-svc-nl4g8 Jan 11 16:23:07.511: INFO: Got endpoints: latency-svc-nl4g8 [1.088882148s] Jan 11 16:23:07.582: INFO: Created: latency-svc-d92rc Jan 11 16:23:07.621: INFO: Got endpoints: latency-svc-d92rc [1.134190343s] Jan 11 16:23:07.623: INFO: Created: latency-svc-hqnvd Jan 11 16:23:07.678: INFO: Got endpoints: latency-svc-hqnvd [1.142478765s] Jan 11 16:23:07.724: INFO: Created: latency-svc-hv45c Jan 11 16:23:07.754: INFO: Got endpoints: latency-svc-hv45c [1.194330374s] Jan 11 16:23:07.841: INFO: Created: latency-svc-srl8j Jan 11 16:23:07.856: INFO: Created: latency-svc-fdksp Jan 11 16:23:07.856: INFO: Got endpoints: latency-svc-srl8j [1.227429613s] Jan 11 16:23:07.881: INFO: Got endpoints: latency-svc-fdksp [1.215599453s] Jan 11 16:23:08.068: INFO: Created: latency-svc-m6cqp Jan 11 16:23:08.133: INFO: Created: latency-svc-dq7p5 Jan 11 16:23:08.133: INFO: Got endpoints: latency-svc-m6cqp [1.438347432s] Jan 11 16:23:08.162: INFO: Got endpoints: latency-svc-dq7p5 [1.443389711s] Jan 11 16:23:08.229: INFO: Created: latency-svc-bsh97 Jan 11 16:23:08.238: INFO: Got endpoints: latency-svc-bsh97 [1.428608184s] Jan 11 16:23:08.272: INFO: Created: latency-svc-fp6z4 Jan 11 16:23:08.299: INFO: Got endpoints: latency-svc-fp6z4 [1.457270009s] Jan 11 16:23:08.360: INFO: Created: latency-svc-sqsgr Jan 11 16:23:08.385: INFO: Got endpoints: latency-svc-sqsgr [1.478725264s] Jan 11 16:23:08.421: INFO: Created: latency-svc-crtsq Jan 11 16:23:08.451: INFO: Got endpoints: latency-svc-crtsq [1.516539745s] Jan 11 16:23:08.522: INFO: Created: latency-svc-vk5fk Jan 11 16:23:08.548: INFO: Got endpoints: latency-svc-vk5fk [1.546212167s] Jan 11 16:23:08.548: INFO: Created: latency-svc-jt6tv Jan 11 16:23:09.078: INFO: Got endpoints: latency-svc-jt6tv [2.006974958s] Jan 11 16:23:09.081: INFO: Created: latency-svc-7kct9 Jan 11 16:23:09.111: INFO: Got endpoints: latency-svc-7kct9 [1.655572052s] Jan 11 16:23:09.165: INFO: Created: latency-svc-kdhfl Jan 11 16:23:09.229: INFO: Got endpoints: latency-svc-kdhfl [1.716979671s] Jan 11 16:23:09.230: INFO: Latencies: [81.060935ms 123.120208ms 216.022096ms 284.647371ms 
333.594082ms 438.223105ms 475.334857ms 572.091518ms 625.461865ms 640.005094ms 641.987619ms 656.579355ms 680.141707ms 698.650256ms 705.825753ms 707.212194ms 713.175323ms 726.220852ms 729.013411ms 731.073167ms 732.014069ms 739.925985ms 747.837745ms 749.557001ms 750.082683ms 752.13031ms 752.628685ms 752.854868ms 753.991392ms 765.734736ms 775.009384ms 784.816744ms 787.952578ms 794.655747ms 798.235214ms 800.245932ms 802.665814ms 809.905024ms 814.515602ms 831.781686ms 839.711371ms 840.265673ms 841.746245ms 843.781268ms 846.018437ms 846.642046ms 846.800992ms 849.501677ms 851.617971ms 852.478171ms 855.127901ms 856.989589ms 861.716682ms 863.716943ms 864.811911ms 875.309529ms 875.703644ms 876.032166ms 880.759155ms 887.615245ms 889.808183ms 891.677747ms 895.579104ms 901.580505ms 902.159091ms 904.162796ms 904.836238ms 907.695968ms 909.065228ms 910.533946ms 910.565723ms 912.204012ms 912.868285ms 912.989589ms 913.690539ms 921.777345ms 923.014191ms 925.786259ms 926.128453ms 928.354234ms 929.554861ms 930.048794ms 930.901773ms 931.050376ms 931.999078ms 932.942625ms 935.981606ms 936.588093ms 936.930576ms 939.305615ms 942.461791ms 944.473912ms 945.741369ms 945.797507ms 948.16783ms 951.878571ms 954.509641ms 957.378381ms 959.026812ms 961.19817ms 962.879512ms 965.758193ms 965.859007ms 967.69797ms 969.068308ms 971.527428ms 972.313994ms 975.957856ms 977.08753ms 981.963249ms 981.972499ms 982.545883ms 987.312299ms 988.222322ms 997.935437ms 1.00148216s 1.004100271s 1.006051326s 1.007001019s 1.011327206s 1.011517682s 1.012403552s 1.012432523s 1.017748832s 1.02846039s 1.029683793s 1.030193218s 1.031315455s 1.036635223s 1.037518848s 1.040987809s 1.045045698s 1.045679123s 1.055687749s 1.056062254s 1.061688349s 1.062654113s 1.065541212s 1.072570448s 1.073522369s 1.073728064s 1.082454696s 1.088882148s 1.09352816s 1.104257082s 1.106793396s 1.110858478s 1.116139782s 1.118041955s 1.134190343s 1.137788033s 1.139379034s 1.141637596s 1.142478765s 1.144008018s 1.144654377s 1.147655924s 1.149977971s 1.155860659s 1.174662985s 1.17608821s 1.194201224s 1.194330374s 1.215176111s 1.215599453s 1.227429613s 1.298595798s 1.300865921s 1.308121837s 1.334189406s 1.426782624s 1.428608184s 1.438347432s 1.443389711s 1.457270009s 1.478725264s 1.49255396s 1.515336726s 1.516539745s 1.537562052s 1.546212167s 1.655572052s 1.716979671s 1.75119495s 2.006974958s 2.024107964s 2.083211278s 2.359026709s 2.408809204s 2.433553825s 2.546813644s 2.927719609s 2.986409029s 3.0081861s 3.02480383s 3.025008383s 3.029549337s 3.039091017s 3.049508181s 3.099334413s] Jan 11 16:23:09.233: INFO: 50 %ile: 962.879512ms Jan 11 16:23:09.233: INFO: 90 %ile: 1.546212167s Jan 11 16:23:09.234: INFO: 99 %ile: 3.049508181s Jan 11 16:23:09.234: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:23:09.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-9768" for this suite. 
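The "50 %ile / 90 %ile / 99 %ile" summary above is derived from the 200 per-endpoint latencies listed in the same log line. A minimal Go sketch of that kind of nearest-rank percentile calculation is shown below; it is an illustration only, not the e2e framework's own implementation, and the sample durations are placeholders standing in for the real "Got endpoints: ... [dur]" values.

```go
// Minimal sketch (not the e2e framework's code): sort the recorded
// endpoint-creation latencies and read off the 50th/90th/99th percentile,
// the way the summary lines above report them.
package main

import (
	"fmt"
	"sort"
	"time"
)

// percentile returns the value at fraction p (0 < p <= 1) of a sorted
// slice of durations, using a simple nearest-rank rule.
func percentile(sorted []time.Duration, p float64) time.Duration {
	if len(sorted) == 0 {
		return 0
	}
	idx := int(float64(len(sorted))*p+0.5) - 1
	if idx < 0 {
		idx = 0
	}
	if idx >= len(sorted) {
		idx = len(sorted) - 1
	}
	return sorted[idx]
}

func main() {
	// Placeholder samples; the real test feeds in all 200 latencies.
	latencies := []time.Duration{
		81 * time.Millisecond, 750 * time.Millisecond, 962 * time.Millisecond,
		1546 * time.Millisecond, 3049 * time.Millisecond,
	}
	sort.Slice(latencies, func(i, j int) bool { return latencies[i] < latencies[j] })
	for _, p := range []float64{0.50, 0.90, 0.99} {
		fmt.Printf("%.0f %%ile: %v\n", p*100, percentile(latencies, p))
	}
	fmt.Println("Total sample count:", len(latencies))
}
```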
• [SLOW TEST:20.268 seconds] [sig-network] Service endpoints latency /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":309,"completed":32,"skipped":752,"failed":0} SSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:23:09.264: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0111 16:23:10.477732 10 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Jan 11 16:24:12.504: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:24:12.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8942" for this suite. 
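The garbage-collector case above deletes the Deployment and then waits for the owned ReplicaSet and Pods to disappear via owner-reference garbage collection (the "expected 0 rs, got 1 rs" step is that wait still in progress). A hedged client-go sketch of the same idea follows; the namespace and Deployment name are placeholders, and a real check would poll rather than list once.

```go
// Illustrative sketch only (not the conformance test's code): delete a
// Deployment with background cascading deletion, then list the ReplicaSets
// in the namespace, mirroring the "expected 0 rs" check above.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	ns, name := "gc-demo", "simpletest-deployment" // placeholders
	policy := metav1.DeletePropagationBackground   // GC removes dependents asynchronously

	if err := client.AppsV1().Deployments(ns).Delete(context.TODO(), name,
		metav1.DeleteOptions{PropagationPolicy: &policy}); err != nil {
		panic(err)
	}

	// Single list for illustration; the e2e test polls until the count is zero.
	rsList, err := client.AppsV1().ReplicaSets(ns).List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("expected 0 rs, got %d rs\n", len(rsList.Items))
}
```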
• [SLOW TEST:63.262 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete RS created by deployment when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":309,"completed":33,"skipped":757,"failed":0} S ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:24:12.527: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:129 [It] should run and stop complex daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 11 16:24:12.674: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Jan 11 16:24:12.684: INFO: Number of nodes with available pods: 0 Jan 11 16:24:12.684: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
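The "complex daemon" case creates a DaemonSet whose pod template carries a nodeSelector, so no daemon pods run until a node actually gets the matching label; the "Change node label to blue" step supplies that label to one node. A hedged sketch of that labeling step via client-go follows; the node name and the color=blue label key/value are placeholders (the real test generates its own label key per namespace).

```go
// Hedged sketch (not the test framework's code): patch one node so it carries
// the label a DaemonSet's nodeSelector is waiting for, which is what the
// "Change node label to blue" step above does.
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// The DaemonSet's spec.template.spec.nodeSelector would be the matching
	// map, e.g. {"color": "blue"}; both key and value here are illustrative.
	patch := []byte(`{"metadata":{"labels":{"color":"blue"}}}`)
	if _, err := client.CoreV1().Nodes().Patch(context.TODO(), "leguer-worker",
		types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
		panic(err)
	}
}
```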
Jan 11 16:24:12.775: INFO: Number of nodes with available pods: 0 Jan 11 16:24:12.775: INFO: Node leguer-worker is running more than one daemon pod Jan 11 16:24:13.783: INFO: Number of nodes with available pods: 0 Jan 11 16:24:13.783: INFO: Node leguer-worker is running more than one daemon pod Jan 11 16:24:14.913: INFO: Number of nodes with available pods: 0 Jan 11 16:24:14.913: INFO: Node leguer-worker is running more than one daemon pod Jan 11 16:24:15.782: INFO: Number of nodes with available pods: 0 Jan 11 16:24:15.782: INFO: Node leguer-worker is running more than one daemon pod Jan 11 16:24:16.782: INFO: Number of nodes with available pods: 1 Jan 11 16:24:16.782: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Jan 11 16:24:16.825: INFO: Number of nodes with available pods: 1 Jan 11 16:24:16.825: INFO: Number of running nodes: 0, number of available pods: 1 Jan 11 16:24:17.832: INFO: Number of nodes with available pods: 0 Jan 11 16:24:17.832: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Jan 11 16:24:17.877: INFO: Number of nodes with available pods: 0 Jan 11 16:24:17.877: INFO: Node leguer-worker is running more than one daemon pod Jan 11 16:24:18.886: INFO: Number of nodes with available pods: 0 Jan 11 16:24:18.887: INFO: Node leguer-worker is running more than one daemon pod Jan 11 16:24:19.885: INFO: Number of nodes with available pods: 0 Jan 11 16:24:19.886: INFO: Node leguer-worker is running more than one daemon pod Jan 11 16:24:20.886: INFO: Number of nodes with available pods: 0 Jan 11 16:24:20.886: INFO: Node leguer-worker is running more than one daemon pod Jan 11 16:24:21.886: INFO: Number of nodes with available pods: 0 Jan 11 16:24:21.887: INFO: Node leguer-worker is running more than one daemon pod Jan 11 16:24:22.885: INFO: Number of nodes with available pods: 0 Jan 11 16:24:22.886: INFO: Node leguer-worker is running more than one daemon pod Jan 11 16:24:23.901: INFO: Number of nodes with available pods: 0 Jan 11 16:24:23.901: INFO: Node leguer-worker is running more than one daemon pod Jan 11 16:24:24.887: INFO: Number of nodes with available pods: 0 Jan 11 16:24:24.887: INFO: Node leguer-worker is running more than one daemon pod Jan 11 16:24:25.884: INFO: Number of nodes with available pods: 0 Jan 11 16:24:25.885: INFO: Node leguer-worker is running more than one daemon pod Jan 11 16:24:26.886: INFO: Number of nodes with available pods: 0 Jan 11 16:24:26.886: INFO: Node leguer-worker is running more than one daemon pod Jan 11 16:24:27.887: INFO: Number of nodes with available pods: 0 Jan 11 16:24:27.887: INFO: Node leguer-worker is running more than one daemon pod Jan 11 16:24:28.900: INFO: Number of nodes with available pods: 0 Jan 11 16:24:28.900: INFO: Node leguer-worker is running more than one daemon pod Jan 11 16:24:29.885: INFO: Number of nodes with available pods: 0 Jan 11 16:24:29.885: INFO: Node leguer-worker is running more than one daemon pod Jan 11 16:24:30.885: INFO: Number of nodes with available pods: 0 Jan 11 16:24:30.885: INFO: Node leguer-worker is running more than one daemon pod Jan 11 16:24:31.883: INFO: Number of nodes with available pods: 0 Jan 11 16:24:31.884: INFO: Node leguer-worker is running more than one daemon pod Jan 11 16:24:32.884: INFO: Number of nodes with available pods: 0 Jan 11 16:24:32.884: INFO: Node leguer-worker is running 
more than one daemon pod Jan 11 16:24:33.894: INFO: Number of nodes with available pods: 0 Jan 11 16:24:33.894: INFO: Node leguer-worker is running more than one daemon pod Jan 11 16:24:34.885: INFO: Number of nodes with available pods: 0 Jan 11 16:24:34.885: INFO: Node leguer-worker is running more than one daemon pod Jan 11 16:24:35.899: INFO: Number of nodes with available pods: 0 Jan 11 16:24:35.900: INFO: Node leguer-worker is running more than one daemon pod Jan 11 16:24:36.886: INFO: Number of nodes with available pods: 0 Jan 11 16:24:36.886: INFO: Node leguer-worker is running more than one daemon pod Jan 11 16:24:37.885: INFO: Number of nodes with available pods: 0 Jan 11 16:24:37.885: INFO: Node leguer-worker is running more than one daemon pod Jan 11 16:24:38.886: INFO: Number of nodes with available pods: 0 Jan 11 16:24:38.886: INFO: Node leguer-worker is running more than one daemon pod Jan 11 16:24:39.888: INFO: Number of nodes with available pods: 0 Jan 11 16:24:39.888: INFO: Node leguer-worker is running more than one daemon pod Jan 11 16:24:40.886: INFO: Number of nodes with available pods: 0 Jan 11 16:24:40.886: INFO: Node leguer-worker is running more than one daemon pod Jan 11 16:24:41.884: INFO: Number of nodes with available pods: 0 Jan 11 16:24:41.884: INFO: Node leguer-worker is running more than one daemon pod Jan 11 16:24:42.886: INFO: Number of nodes with available pods: 0 Jan 11 16:24:42.886: INFO: Node leguer-worker is running more than one daemon pod Jan 11 16:24:43.884: INFO: Number of nodes with available pods: 0 Jan 11 16:24:43.884: INFO: Node leguer-worker is running more than one daemon pod Jan 11 16:24:44.885: INFO: Number of nodes with available pods: 0 Jan 11 16:24:44.885: INFO: Node leguer-worker is running more than one daemon pod Jan 11 16:24:45.884: INFO: Number of nodes with available pods: 0 Jan 11 16:24:45.884: INFO: Node leguer-worker is running more than one daemon pod Jan 11 16:24:46.885: INFO: Number of nodes with available pods: 0 Jan 11 16:24:46.886: INFO: Node leguer-worker is running more than one daemon pod Jan 11 16:24:47.913: INFO: Number of nodes with available pods: 0 Jan 11 16:24:47.913: INFO: Node leguer-worker is running more than one daemon pod Jan 11 16:24:48.891: INFO: Number of nodes with available pods: 0 Jan 11 16:24:48.891: INFO: Node leguer-worker is running more than one daemon pod Jan 11 16:24:49.887: INFO: Number of nodes with available pods: 0 Jan 11 16:24:49.887: INFO: Node leguer-worker is running more than one daemon pod Jan 11 16:24:50.887: INFO: Number of nodes with available pods: 0 Jan 11 16:24:50.888: INFO: Node leguer-worker is running more than one daemon pod Jan 11 16:24:51.886: INFO: Number of nodes with available pods: 0 Jan 11 16:24:51.886: INFO: Node leguer-worker is running more than one daemon pod Jan 11 16:24:52.886: INFO: Number of nodes with available pods: 0 Jan 11 16:24:52.886: INFO: Node leguer-worker is running more than one daemon pod Jan 11 16:24:53.886: INFO: Number of nodes with available pods: 0 Jan 11 16:24:53.886: INFO: Node leguer-worker is running more than one daemon pod Jan 11 16:24:54.886: INFO: Number of nodes with available pods: 0 Jan 11 16:24:54.887: INFO: Node leguer-worker is running more than one daemon pod Jan 11 16:24:55.885: INFO: Number of nodes with available pods: 0 Jan 11 16:24:55.885: INFO: Node leguer-worker is running more than one daemon pod Jan 11 16:24:56.884: INFO: Number of nodes with available pods: 0 Jan 11 16:24:56.884: INFO: Node leguer-worker is running 
more than one daemon pod Jan 11 16:24:57.885: INFO: Number of nodes with available pods: 0 Jan 11 16:24:57.885: INFO: Node leguer-worker is running more than one daemon pod Jan 11 16:24:58.886: INFO: Number of nodes with available pods: 0 Jan 11 16:24:58.886: INFO: Node leguer-worker is running more than one daemon pod Jan 11 16:24:59.886: INFO: Number of nodes with available pods: 0 Jan 11 16:24:59.886: INFO: Node leguer-worker is running more than one daemon pod Jan 11 16:25:00.886: INFO: Number of nodes with available pods: 0 Jan 11 16:25:00.886: INFO: Node leguer-worker is running more than one daemon pod Jan 11 16:25:01.883: INFO: Number of nodes with available pods: 0 Jan 11 16:25:01.883: INFO: Node leguer-worker is running more than one daemon pod Jan 11 16:25:02.886: INFO: Number of nodes with available pods: 0 Jan 11 16:25:02.886: INFO: Node leguer-worker is running more than one daemon pod Jan 11 16:25:03.885: INFO: Number of nodes with available pods: 0 Jan 11 16:25:03.885: INFO: Node leguer-worker is running more than one daemon pod Jan 11 16:25:04.885: INFO: Number of nodes with available pods: 0 Jan 11 16:25:04.885: INFO: Node leguer-worker is running more than one daemon pod Jan 11 16:25:05.886: INFO: Number of nodes with available pods: 0 Jan 11 16:25:05.887: INFO: Node leguer-worker is running more than one daemon pod Jan 11 16:25:06.887: INFO: Number of nodes with available pods: 0 Jan 11 16:25:06.887: INFO: Node leguer-worker is running more than one daemon pod Jan 11 16:25:07.886: INFO: Number of nodes with available pods: 0 Jan 11 16:25:07.886: INFO: Node leguer-worker is running more than one daemon pod Jan 11 16:25:08.886: INFO: Number of nodes with available pods: 0 Jan 11 16:25:08.886: INFO: Node leguer-worker is running more than one daemon pod Jan 11 16:25:09.887: INFO: Number of nodes with available pods: 0 Jan 11 16:25:09.887: INFO: Node leguer-worker is running more than one daemon pod Jan 11 16:25:10.886: INFO: Number of nodes with available pods: 0 Jan 11 16:25:10.886: INFO: Node leguer-worker is running more than one daemon pod Jan 11 16:25:11.885: INFO: Number of nodes with available pods: 0 Jan 11 16:25:11.886: INFO: Node leguer-worker is running more than one daemon pod Jan 11 16:25:12.884: INFO: Number of nodes with available pods: 0 Jan 11 16:25:12.885: INFO: Node leguer-worker is running more than one daemon pod Jan 11 16:25:13.885: INFO: Number of nodes with available pods: 1 Jan 11 16:25:13.885: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:95 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5589, will wait for the garbage collector to delete the pods Jan 11 16:25:14.001: INFO: Deleting DaemonSet.extensions daemon-set took: 48.246037ms Jan 11 16:25:14.602: INFO: Terminating DaemonSet.extensions daemon-set pods took: 601.22341ms Jan 11 16:25:20.209: INFO: Number of nodes with available pods: 0 Jan 11 16:25:20.210: INFO: Number of running nodes: 0, number of available pods: 0 Jan 11 16:25:20.215: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"190658"},"items":null} Jan 11 16:25:20.219: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"190658"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:25:20.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-5589" for this suite. • [SLOW TEST:67.753 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":309,"completed":34,"skipped":758,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:25:20.284: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating configMap with name projected-configmap-test-volume-be193a8b-9dcb-413c-800a-0a6fbe4e7188 STEP: Creating a pod to test consume configMaps Jan 11 16:25:20.436: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a4e4cdf2-0af9-4723-a54b-c26281e26b47" in namespace "projected-4852" to be "Succeeded or Failed" Jan 11 16:25:20.441: INFO: Pod "pod-projected-configmaps-a4e4cdf2-0af9-4723-a54b-c26281e26b47": Phase="Pending", Reason="", readiness=false. Elapsed: 5.168988ms Jan 11 16:25:22.449: INFO: Pod "pod-projected-configmaps-a4e4cdf2-0af9-4723-a54b-c26281e26b47": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013409646s Jan 11 16:25:24.457: INFO: Pod "pod-projected-configmaps-a4e4cdf2-0af9-4723-a54b-c26281e26b47": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020744349s STEP: Saw pod success Jan 11 16:25:24.457: INFO: Pod "pod-projected-configmaps-a4e4cdf2-0af9-4723-a54b-c26281e26b47" satisfied condition "Succeeded or Failed" Jan 11 16:25:24.462: INFO: Trying to get logs from node leguer-worker pod pod-projected-configmaps-a4e4cdf2-0af9-4723-a54b-c26281e26b47 container agnhost-container: STEP: delete the pod Jan 11 16:25:24.548: INFO: Waiting for pod pod-projected-configmaps-a4e4cdf2-0af9-4723-a54b-c26281e26b47 to disappear Jan 11 16:25:24.559: INFO: Pod pod-projected-configmaps-a4e4cdf2-0af9-4723-a54b-c26281e26b47 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:25:24.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4852" for this suite. 
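The projected-configMap case above mounts a ConfigMap into the pod through a projected volume and then reads the file back from the container ("Creating a pod to test consume configMaps"). The Go sketch below shows the shape of such a volume using the core/v1 API types; the ConfigMap name, key, and mount path are placeholders rather than the generated names in the log.

```go
// Hedged sketch of the kind of volume the projected-configMap test exercises:
// a Pod volume that projects one ConfigMap key into the container filesystem.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "projected-configmap-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{
							Name: "projected-configmap-test-volume", // placeholder
						},
						Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-1"}},
					},
				}},
			},
		},
	}
	// The test's container mounts this volume and reads the projected file,
	// which is the "consume configMaps" check logged above.
	mount := corev1.VolumeMount{Name: vol.Name, MountPath: "/etc/projected-configmap-volume"}
	fmt.Println(vol.Name, mount.MountPath)
}
```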
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":309,"completed":35,"skipped":815,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:25:24.577: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:129 [It] should rollback without unnecessary restarts [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 11 16:25:24.865: INFO: Create a RollingUpdate DaemonSet Jan 11 16:25:24.872: INFO: Check that daemon pods launch on every node of the cluster Jan 11 16:25:24.914: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 16:25:24.976: INFO: Number of nodes with available pods: 0 Jan 11 16:25:24.976: INFO: Node leguer-worker is running more than one daemon pod Jan 11 16:25:26.030: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 16:25:26.037: INFO: Number of nodes with available pods: 0 Jan 11 16:25:26.038: INFO: Node leguer-worker is running more than one daemon pod Jan 11 16:25:26.990: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 16:25:26.997: INFO: Number of nodes with available pods: 0 Jan 11 16:25:26.997: INFO: Node leguer-worker is running more than one daemon pod Jan 11 16:25:27.989: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 16:25:27.996: INFO: Number of nodes with available pods: 0 Jan 11 16:25:27.996: INFO: Node leguer-worker is running more than one daemon pod Jan 11 16:25:28.990: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 16:25:28.998: INFO: Number of nodes with available pods: 1 Jan 11 16:25:28.998: INFO: Node leguer-worker is running more than one daemon pod Jan 11 16:25:29.990: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 16:25:29.996: INFO: Number of nodes with available pods: 2 Jan 11 16:25:29.996: INFO: Number of running nodes: 2, number of available pods: 2 Jan 11 16:25:29.996: INFO: Update the DaemonSet to trigger a rollout Jan 11 16:25:30.008: INFO: Updating DaemonSet daemon-set Jan 11 16:25:40.058: INFO: Roll back the DaemonSet before rollout is 
complete Jan 11 16:25:40.071: INFO: Updating DaemonSet daemon-set Jan 11 16:25:40.071: INFO: Make sure DaemonSet rollback is complete Jan 11 16:25:40.083: INFO: Wrong image for pod: daemon-set-pcjws. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 11 16:25:40.083: INFO: Pod daemon-set-pcjws is not available Jan 11 16:25:40.135: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 16:25:41.144: INFO: Wrong image for pod: daemon-set-pcjws. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 11 16:25:41.145: INFO: Pod daemon-set-pcjws is not available Jan 11 16:25:41.153: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 16:25:42.145: INFO: Wrong image for pod: daemon-set-pcjws. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 11 16:25:42.145: INFO: Pod daemon-set-pcjws is not available Jan 11 16:25:42.154: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 16:25:43.146: INFO: Wrong image for pod: daemon-set-pcjws. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 11 16:25:43.146: INFO: Pod daemon-set-pcjws is not available Jan 11 16:25:43.157: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 16:25:44.144: INFO: Wrong image for pod: daemon-set-pcjws. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 11 16:25:44.144: INFO: Pod daemon-set-pcjws is not available Jan 11 16:25:44.152: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 16:25:45.148: INFO: Wrong image for pod: daemon-set-pcjws. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 11 16:25:45.148: INFO: Pod daemon-set-pcjws is not available Jan 11 16:25:45.158: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 16:25:46.143: INFO: Wrong image for pod: daemon-set-pcjws. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 11 16:25:46.143: INFO: Pod daemon-set-pcjws is not available Jan 11 16:25:46.151: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 16:25:47.147: INFO: Wrong image for pod: daemon-set-pcjws. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 11 16:25:47.147: INFO: Pod daemon-set-pcjws is not available Jan 11 16:25:47.157: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 16:25:48.157: INFO: Wrong image for pod: daemon-set-pcjws. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. 
Jan 11 16:25:48.157: INFO: Pod daemon-set-pcjws is not available Jan 11 16:25:48.166: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 16:25:49.144: INFO: Wrong image for pod: daemon-set-pcjws. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 11 16:25:49.144: INFO: Pod daemon-set-pcjws is not available Jan 11 16:25:49.154: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 16:25:50.145: INFO: Wrong image for pod: daemon-set-pcjws. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 11 16:25:50.145: INFO: Pod daemon-set-pcjws is not available Jan 11 16:25:50.153: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 16:25:51.150: INFO: Wrong image for pod: daemon-set-pcjws. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 11 16:25:51.150: INFO: Pod daemon-set-pcjws is not available Jan 11 16:25:51.161: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 16:25:52.146: INFO: Wrong image for pod: daemon-set-pcjws. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 11 16:25:52.146: INFO: Pod daemon-set-pcjws is not available Jan 11 16:25:52.156: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 16:25:53.149: INFO: Wrong image for pod: daemon-set-pcjws. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 11 16:25:53.149: INFO: Pod daemon-set-pcjws is not available Jan 11 16:25:53.158: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 16:25:54.146: INFO: Wrong image for pod: daemon-set-pcjws. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 11 16:25:54.146: INFO: Pod daemon-set-pcjws is not available Jan 11 16:25:54.153: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 16:25:55.146: INFO: Wrong image for pod: daemon-set-pcjws. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 11 16:25:55.146: INFO: Pod daemon-set-pcjws is not available Jan 11 16:25:55.155: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 16:25:56.146: INFO: Wrong image for pod: daemon-set-pcjws. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 11 16:25:56.146: INFO: Pod daemon-set-pcjws is not available Jan 11 16:25:56.156: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 16:25:57.146: INFO: Wrong image for pod: daemon-set-pcjws. 
Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 11 16:25:57.146: INFO: Pod daemon-set-pcjws is not available Jan 11 16:25:57.155: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 16:25:58.146: INFO: Wrong image for pod: daemon-set-pcjws. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 11 16:25:58.146: INFO: Pod daemon-set-pcjws is not available Jan 11 16:25:58.157: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 16:25:59.147: INFO: Wrong image for pod: daemon-set-pcjws. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 11 16:25:59.147: INFO: Pod daemon-set-pcjws is not available Jan 11 16:25:59.158: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 16:26:00.144: INFO: Wrong image for pod: daemon-set-pcjws. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 11 16:26:00.145: INFO: Pod daemon-set-pcjws is not available Jan 11 16:26:00.154: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 16:26:01.150: INFO: Wrong image for pod: daemon-set-pcjws. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 11 16:26:01.150: INFO: Pod daemon-set-pcjws is not available Jan 11 16:26:01.163: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 16:26:02.145: INFO: Wrong image for pod: daemon-set-pcjws. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 11 16:26:02.145: INFO: Pod daemon-set-pcjws is not available Jan 11 16:26:02.154: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 16:26:03.146: INFO: Wrong image for pod: daemon-set-pcjws. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 11 16:26:03.146: INFO: Pod daemon-set-pcjws is not available Jan 11 16:26:03.157: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 16:26:04.145: INFO: Wrong image for pod: daemon-set-pcjws. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 11 16:26:04.145: INFO: Pod daemon-set-pcjws is not available Jan 11 16:26:04.155: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 16:26:05.145: INFO: Wrong image for pod: daemon-set-pcjws. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. 
Jan 11 16:26:05.145: INFO: Pod daemon-set-pcjws is not available Jan 11 16:26:05.155: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 16:26:06.145: INFO: Wrong image for pod: daemon-set-pcjws. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 11 16:26:06.146: INFO: Pod daemon-set-pcjws is not available Jan 11 16:26:06.157: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 16:26:07.146: INFO: Wrong image for pod: daemon-set-pcjws. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 11 16:26:07.146: INFO: Pod daemon-set-pcjws is not available Jan 11 16:26:07.158: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 16:26:08.145: INFO: Wrong image for pod: daemon-set-pcjws. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 11 16:26:08.145: INFO: Pod daemon-set-pcjws is not available Jan 11 16:26:08.155: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 16:26:09.145: INFO: Wrong image for pod: daemon-set-pcjws. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 11 16:26:09.145: INFO: Pod daemon-set-pcjws is not available Jan 11 16:26:09.154: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 16:26:10.146: INFO: Wrong image for pod: daemon-set-pcjws. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 11 16:26:10.146: INFO: Pod daemon-set-pcjws is not available Jan 11 16:26:10.154: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 16:26:11.146: INFO: Wrong image for pod: daemon-set-pcjws. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 11 16:26:11.146: INFO: Pod daemon-set-pcjws is not available Jan 11 16:26:11.158: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 16:26:12.146: INFO: Wrong image for pod: daemon-set-pcjws. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 11 16:26:12.147: INFO: Pod daemon-set-pcjws is not available Jan 11 16:26:12.155: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 16:26:13.145: INFO: Wrong image for pod: daemon-set-pcjws. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 11 16:26:13.145: INFO: Pod daemon-set-pcjws is not available Jan 11 16:26:13.155: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 16:26:14.147: INFO: Wrong image for pod: daemon-set-pcjws. 
Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 11 16:26:14.147: INFO: Pod daemon-set-pcjws is not available Jan 11 16:26:14.157: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 16:26:15.146: INFO: Wrong image for pod: daemon-set-pcjws. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 11 16:26:15.147: INFO: Pod daemon-set-pcjws is not available Jan 11 16:26:15.158: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 16:26:16.144: INFO: Wrong image for pod: daemon-set-pcjws. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 11 16:26:16.144: INFO: Pod daemon-set-pcjws is not available Jan 11 16:26:16.155: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 16:26:17.146: INFO: Wrong image for pod: daemon-set-pcjws. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 11 16:26:17.146: INFO: Pod daemon-set-pcjws is not available Jan 11 16:26:17.157: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 16:26:18.146: INFO: Wrong image for pod: daemon-set-pcjws. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 11 16:26:18.146: INFO: Pod daemon-set-pcjws is not available Jan 11 16:26:18.156: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 16:26:19.147: INFO: Wrong image for pod: daemon-set-pcjws. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. 
Jan 11 16:26:19.147: INFO: Pod daemon-set-pcjws is not available Jan 11 16:26:19.159: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 16:26:20.145: INFO: Pod daemon-set-j52n9 is not available Jan 11 16:26:20.153: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:95 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5342, will wait for the garbage collector to delete the pods Jan 11 16:26:20.225: INFO: Deleting DaemonSet.extensions daemon-set took: 6.042099ms Jan 11 16:26:20.826: INFO: Terminating DaemonSet.extensions daemon-set pods took: 601.190373ms Jan 11 16:27:19.934: INFO: Number of nodes with available pods: 0 Jan 11 16:27:19.935: INFO: Number of running nodes: 0, number of available pods: 0 Jan 11 16:27:19.940: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"191035"},"items":null} Jan 11 16:27:19.964: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"191035"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:27:19.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-5342" for this suite. • [SLOW TEST:115.420 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":309,"completed":36,"skipped":839,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:27:20.001: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating a test event STEP: listing all events in all namespaces STEP: patching the test event STEP: fetching the test event STEP: deleting the test event STEP: listing all events in all namespaces [AfterEach] [sig-api-machinery] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:27:20.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be 
ready STEP: Destroying namespace "events-2597" for this suite. •{"msg":"PASSED [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":309,"completed":37,"skipped":859,"failed":0} SSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:27:20.218: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-7073.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-7073.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-7073.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7073.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-7073.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-7073.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-7073.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-7073.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7073.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-7073.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-7073.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-7073.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-7073.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-7073.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-7073.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-7073.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-7073.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-7073.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 11 16:27:26.434: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-7073.svc.cluster.local from pod dns-7073/dns-test-6270abb5-9352-47c8-a12a-4a180fc8b33d: the server could not find the requested resource (get pods dns-test-6270abb5-9352-47c8-a12a-4a180fc8b33d) Jan 11 16:27:26.439: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7073.svc.cluster.local from pod dns-7073/dns-test-6270abb5-9352-47c8-a12a-4a180fc8b33d: the server could not find the requested resource (get pods dns-test-6270abb5-9352-47c8-a12a-4a180fc8b33d) Jan 11 16:27:26.443: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7073.svc.cluster.local from pod dns-7073/dns-test-6270abb5-9352-47c8-a12a-4a180fc8b33d: the server could not find the requested resource (get pods dns-test-6270abb5-9352-47c8-a12a-4a180fc8b33d) Jan 11 16:27:26.446: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7073.svc.cluster.local from pod dns-7073/dns-test-6270abb5-9352-47c8-a12a-4a180fc8b33d: the server could not find the requested resource (get pods dns-test-6270abb5-9352-47c8-a12a-4a180fc8b33d) Jan 11 16:27:26.466: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7073.svc.cluster.local from pod dns-7073/dns-test-6270abb5-9352-47c8-a12a-4a180fc8b33d: the server could not find the requested resource (get pods dns-test-6270abb5-9352-47c8-a12a-4a180fc8b33d) Jan 11 16:27:26.470: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7073.svc.cluster.local from pod dns-7073/dns-test-6270abb5-9352-47c8-a12a-4a180fc8b33d: the server could not find the requested resource (get pods dns-test-6270abb5-9352-47c8-a12a-4a180fc8b33d) Jan 11 16:27:26.474: INFO: Unable to read jessie_udp@dns-test-service-2.dns-7073.svc.cluster.local from pod dns-7073/dns-test-6270abb5-9352-47c8-a12a-4a180fc8b33d: the server could not find the requested resource (get pods dns-test-6270abb5-9352-47c8-a12a-4a180fc8b33d) Jan 11 16:27:26.478: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7073.svc.cluster.local from pod dns-7073/dns-test-6270abb5-9352-47c8-a12a-4a180fc8b33d: the server could not find the requested resource (get pods dns-test-6270abb5-9352-47c8-a12a-4a180fc8b33d) Jan 11 16:27:26.485: INFO: Lookups using dns-7073/dns-test-6270abb5-9352-47c8-a12a-4a180fc8b33d failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7073.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7073.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7073.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7073.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7073.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7073.svc.cluster.local jessie_udp@dns-test-service-2.dns-7073.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7073.svc.cluster.local] Jan 11 16:27:31.493: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-7073.svc.cluster.local from pod dns-7073/dns-test-6270abb5-9352-47c8-a12a-4a180fc8b33d: the server could not find the requested resource 
(get pods dns-test-6270abb5-9352-47c8-a12a-4a180fc8b33d) Jan 11 16:27:31.497: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7073.svc.cluster.local from pod dns-7073/dns-test-6270abb5-9352-47c8-a12a-4a180fc8b33d: the server could not find the requested resource (get pods dns-test-6270abb5-9352-47c8-a12a-4a180fc8b33d) Jan 11 16:27:31.501: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7073.svc.cluster.local from pod dns-7073/dns-test-6270abb5-9352-47c8-a12a-4a180fc8b33d: the server could not find the requested resource (get pods dns-test-6270abb5-9352-47c8-a12a-4a180fc8b33d) Jan 11 16:27:31.505: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7073.svc.cluster.local from pod dns-7073/dns-test-6270abb5-9352-47c8-a12a-4a180fc8b33d: the server could not find the requested resource (get pods dns-test-6270abb5-9352-47c8-a12a-4a180fc8b33d) Jan 11 16:27:31.516: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7073.svc.cluster.local from pod dns-7073/dns-test-6270abb5-9352-47c8-a12a-4a180fc8b33d: the server could not find the requested resource (get pods dns-test-6270abb5-9352-47c8-a12a-4a180fc8b33d) Jan 11 16:27:31.520: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7073.svc.cluster.local from pod dns-7073/dns-test-6270abb5-9352-47c8-a12a-4a180fc8b33d: the server could not find the requested resource (get pods dns-test-6270abb5-9352-47c8-a12a-4a180fc8b33d) Jan 11 16:27:31.524: INFO: Unable to read jessie_udp@dns-test-service-2.dns-7073.svc.cluster.local from pod dns-7073/dns-test-6270abb5-9352-47c8-a12a-4a180fc8b33d: the server could not find the requested resource (get pods dns-test-6270abb5-9352-47c8-a12a-4a180fc8b33d) Jan 11 16:27:31.528: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7073.svc.cluster.local from pod dns-7073/dns-test-6270abb5-9352-47c8-a12a-4a180fc8b33d: the server could not find the requested resource (get pods dns-test-6270abb5-9352-47c8-a12a-4a180fc8b33d) Jan 11 16:27:31.537: INFO: Lookups using dns-7073/dns-test-6270abb5-9352-47c8-a12a-4a180fc8b33d failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7073.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7073.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7073.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7073.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7073.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7073.svc.cluster.local jessie_udp@dns-test-service-2.dns-7073.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7073.svc.cluster.local] Jan 11 16:27:36.493: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-7073.svc.cluster.local from pod dns-7073/dns-test-6270abb5-9352-47c8-a12a-4a180fc8b33d: the server could not find the requested resource (get pods dns-test-6270abb5-9352-47c8-a12a-4a180fc8b33d) Jan 11 16:27:36.499: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7073.svc.cluster.local from pod dns-7073/dns-test-6270abb5-9352-47c8-a12a-4a180fc8b33d: the server could not find the requested resource (get pods dns-test-6270abb5-9352-47c8-a12a-4a180fc8b33d) Jan 11 16:27:36.504: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7073.svc.cluster.local from pod dns-7073/dns-test-6270abb5-9352-47c8-a12a-4a180fc8b33d: the server could not find the requested resource (get pods dns-test-6270abb5-9352-47c8-a12a-4a180fc8b33d) Jan 11 16:27:36.509: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7073.svc.cluster.local from 
pod dns-7073/dns-test-6270abb5-9352-47c8-a12a-4a180fc8b33d: the server could not find the requested resource (get pods dns-test-6270abb5-9352-47c8-a12a-4a180fc8b33d) Jan 11 16:27:36.526: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7073.svc.cluster.local from pod dns-7073/dns-test-6270abb5-9352-47c8-a12a-4a180fc8b33d: the server could not find the requested resource (get pods dns-test-6270abb5-9352-47c8-a12a-4a180fc8b33d) Jan 11 16:27:36.530: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7073.svc.cluster.local from pod dns-7073/dns-test-6270abb5-9352-47c8-a12a-4a180fc8b33d: the server could not find the requested resource (get pods dns-test-6270abb5-9352-47c8-a12a-4a180fc8b33d) Jan 11 16:27:36.533: INFO: Unable to read jessie_udp@dns-test-service-2.dns-7073.svc.cluster.local from pod dns-7073/dns-test-6270abb5-9352-47c8-a12a-4a180fc8b33d: the server could not find the requested resource (get pods dns-test-6270abb5-9352-47c8-a12a-4a180fc8b33d) Jan 11 16:27:36.537: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7073.svc.cluster.local from pod dns-7073/dns-test-6270abb5-9352-47c8-a12a-4a180fc8b33d: the server could not find the requested resource (get pods dns-test-6270abb5-9352-47c8-a12a-4a180fc8b33d) Jan 11 16:27:36.544: INFO: Lookups using dns-7073/dns-test-6270abb5-9352-47c8-a12a-4a180fc8b33d failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7073.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7073.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7073.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7073.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7073.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7073.svc.cluster.local jessie_udp@dns-test-service-2.dns-7073.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7073.svc.cluster.local] Jan 11 16:27:41.493: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-7073.svc.cluster.local from pod dns-7073/dns-test-6270abb5-9352-47c8-a12a-4a180fc8b33d: the server could not find the requested resource (get pods dns-test-6270abb5-9352-47c8-a12a-4a180fc8b33d) Jan 11 16:27:41.498: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7073.svc.cluster.local from pod dns-7073/dns-test-6270abb5-9352-47c8-a12a-4a180fc8b33d: the server could not find the requested resource (get pods dns-test-6270abb5-9352-47c8-a12a-4a180fc8b33d) Jan 11 16:27:41.503: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7073.svc.cluster.local from pod dns-7073/dns-test-6270abb5-9352-47c8-a12a-4a180fc8b33d: the server could not find the requested resource (get pods dns-test-6270abb5-9352-47c8-a12a-4a180fc8b33d) Jan 11 16:27:41.507: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7073.svc.cluster.local from pod dns-7073/dns-test-6270abb5-9352-47c8-a12a-4a180fc8b33d: the server could not find the requested resource (get pods dns-test-6270abb5-9352-47c8-a12a-4a180fc8b33d) Jan 11 16:27:41.522: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7073.svc.cluster.local from pod dns-7073/dns-test-6270abb5-9352-47c8-a12a-4a180fc8b33d: the server could not find the requested resource (get pods dns-test-6270abb5-9352-47c8-a12a-4a180fc8b33d) Jan 11 16:27:41.526: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7073.svc.cluster.local from pod dns-7073/dns-test-6270abb5-9352-47c8-a12a-4a180fc8b33d: the server could not find the requested resource (get pods 
dns-test-6270abb5-9352-47c8-a12a-4a180fc8b33d) Jan 11 16:27:41.531: INFO: Unable to read jessie_udp@dns-test-service-2.dns-7073.svc.cluster.local from pod dns-7073/dns-test-6270abb5-9352-47c8-a12a-4a180fc8b33d: the server could not find the requested resource (get pods dns-test-6270abb5-9352-47c8-a12a-4a180fc8b33d) Jan 11 16:27:41.536: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7073.svc.cluster.local from pod dns-7073/dns-test-6270abb5-9352-47c8-a12a-4a180fc8b33d: the server could not find the requested resource (get pods dns-test-6270abb5-9352-47c8-a12a-4a180fc8b33d) Jan 11 16:27:41.545: INFO: Lookups using dns-7073/dns-test-6270abb5-9352-47c8-a12a-4a180fc8b33d failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7073.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7073.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7073.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7073.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7073.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7073.svc.cluster.local jessie_udp@dns-test-service-2.dns-7073.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7073.svc.cluster.local] Jan 11 16:27:46.492: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-7073.svc.cluster.local from pod dns-7073/dns-test-6270abb5-9352-47c8-a12a-4a180fc8b33d: the server could not find the requested resource (get pods dns-test-6270abb5-9352-47c8-a12a-4a180fc8b33d) Jan 11 16:27:46.496: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7073.svc.cluster.local from pod dns-7073/dns-test-6270abb5-9352-47c8-a12a-4a180fc8b33d: the server could not find the requested resource (get pods dns-test-6270abb5-9352-47c8-a12a-4a180fc8b33d) Jan 11 16:27:46.500: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7073.svc.cluster.local from pod dns-7073/dns-test-6270abb5-9352-47c8-a12a-4a180fc8b33d: the server could not find the requested resource (get pods dns-test-6270abb5-9352-47c8-a12a-4a180fc8b33d) Jan 11 16:27:46.505: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7073.svc.cluster.local from pod dns-7073/dns-test-6270abb5-9352-47c8-a12a-4a180fc8b33d: the server could not find the requested resource (get pods dns-test-6270abb5-9352-47c8-a12a-4a180fc8b33d) Jan 11 16:27:46.515: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7073.svc.cluster.local from pod dns-7073/dns-test-6270abb5-9352-47c8-a12a-4a180fc8b33d: the server could not find the requested resource (get pods dns-test-6270abb5-9352-47c8-a12a-4a180fc8b33d) Jan 11 16:27:46.519: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7073.svc.cluster.local from pod dns-7073/dns-test-6270abb5-9352-47c8-a12a-4a180fc8b33d: the server could not find the requested resource (get pods dns-test-6270abb5-9352-47c8-a12a-4a180fc8b33d) Jan 11 16:27:46.526: INFO: Unable to read jessie_udp@dns-test-service-2.dns-7073.svc.cluster.local from pod dns-7073/dns-test-6270abb5-9352-47c8-a12a-4a180fc8b33d: the server could not find the requested resource (get pods dns-test-6270abb5-9352-47c8-a12a-4a180fc8b33d) Jan 11 16:27:46.531: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7073.svc.cluster.local from pod dns-7073/dns-test-6270abb5-9352-47c8-a12a-4a180fc8b33d: the server could not find the requested resource (get pods dns-test-6270abb5-9352-47c8-a12a-4a180fc8b33d) Jan 11 16:27:46.566: INFO: Lookups using dns-7073/dns-test-6270abb5-9352-47c8-a12a-4a180fc8b33d failed for: 
[wheezy_udp@dns-querier-2.dns-test-service-2.dns-7073.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7073.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7073.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7073.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7073.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7073.svc.cluster.local jessie_udp@dns-test-service-2.dns-7073.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7073.svc.cluster.local] Jan 11 16:27:51.493: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-7073.svc.cluster.local from pod dns-7073/dns-test-6270abb5-9352-47c8-a12a-4a180fc8b33d: the server could not find the requested resource (get pods dns-test-6270abb5-9352-47c8-a12a-4a180fc8b33d) Jan 11 16:27:51.499: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7073.svc.cluster.local from pod dns-7073/dns-test-6270abb5-9352-47c8-a12a-4a180fc8b33d: the server could not find the requested resource (get pods dns-test-6270abb5-9352-47c8-a12a-4a180fc8b33d) Jan 11 16:27:51.503: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7073.svc.cluster.local from pod dns-7073/dns-test-6270abb5-9352-47c8-a12a-4a180fc8b33d: the server could not find the requested resource (get pods dns-test-6270abb5-9352-47c8-a12a-4a180fc8b33d) Jan 11 16:27:51.508: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7073.svc.cluster.local from pod dns-7073/dns-test-6270abb5-9352-47c8-a12a-4a180fc8b33d: the server could not find the requested resource (get pods dns-test-6270abb5-9352-47c8-a12a-4a180fc8b33d) Jan 11 16:27:51.521: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7073.svc.cluster.local from pod dns-7073/dns-test-6270abb5-9352-47c8-a12a-4a180fc8b33d: the server could not find the requested resource (get pods dns-test-6270abb5-9352-47c8-a12a-4a180fc8b33d) Jan 11 16:27:51.525: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7073.svc.cluster.local from pod dns-7073/dns-test-6270abb5-9352-47c8-a12a-4a180fc8b33d: the server could not find the requested resource (get pods dns-test-6270abb5-9352-47c8-a12a-4a180fc8b33d) Jan 11 16:27:51.530: INFO: Unable to read jessie_udp@dns-test-service-2.dns-7073.svc.cluster.local from pod dns-7073/dns-test-6270abb5-9352-47c8-a12a-4a180fc8b33d: the server could not find the requested resource (get pods dns-test-6270abb5-9352-47c8-a12a-4a180fc8b33d) Jan 11 16:27:51.535: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7073.svc.cluster.local from pod dns-7073/dns-test-6270abb5-9352-47c8-a12a-4a180fc8b33d: the server could not find the requested resource (get pods dns-test-6270abb5-9352-47c8-a12a-4a180fc8b33d) Jan 11 16:27:51.543: INFO: Lookups using dns-7073/dns-test-6270abb5-9352-47c8-a12a-4a180fc8b33d failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7073.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7073.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7073.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7073.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7073.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7073.svc.cluster.local jessie_udp@dns-test-service-2.dns-7073.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7073.svc.cluster.local] Jan 11 16:27:56.545: INFO: DNS probes using dns-7073/dns-test-6270abb5-9352-47c8-a12a-4a180fc8b33d succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:27:57.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7073" for this suite. • [SLOW TEST:36.961 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":309,"completed":38,"skipped":866,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:27:57.181: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating service in namespace services-3881 Jan 11 16:28:01.356: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-3881 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' Jan 11 16:28:02.844: INFO: stderr: "I0111 16:28:02.676713 82 log.go:181] (0x24cc070) (0x24cc0e0) Create stream\nI0111 16:28:02.680871 82 log.go:181] (0x24cc070) (0x24cc0e0) Stream added, broadcasting: 1\nI0111 16:28:02.696673 82 log.go:181] (0x24cc070) Reply frame received for 1\nI0111 16:28:02.698124 82 log.go:181] (0x24cc070) (0x24cc230) Create stream\nI0111 16:28:02.698338 82 log.go:181] (0x24cc070) (0x24cc230) Stream added, broadcasting: 3\nI0111 16:28:02.700992 82 log.go:181] (0x24cc070) Reply frame received for 3\nI0111 16:28:02.701622 82 log.go:181] (0x24cc070) (0x27af880) Create stream\nI0111 16:28:02.701765 82 log.go:181] (0x24cc070) (0x27af880) Stream added, broadcasting: 5\nI0111 16:28:02.704020 82 log.go:181] (0x24cc070) Reply frame received for 5\nI0111 16:28:02.774098 82 log.go:181] (0x24cc070) Data frame received for 5\nI0111 16:28:02.774362 82 log.go:181] (0x27af880) (5) Data frame handling\nI0111 16:28:02.774848 82 log.go:181] (0x27af880) (5) Data frame sent\n+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\nI0111 16:28:02.826714 82 log.go:181] (0x24cc070) Data frame received for 5\nI0111 16:28:02.826928 82 log.go:181] (0x27af880) (5) Data frame handling\nI0111 16:28:02.827260 82 log.go:181] (0x24cc070) Data frame received for 3\nI0111 16:28:02.827518 82 log.go:181] (0x24cc230) (3) Data frame handling\nI0111 16:28:02.827772 82 log.go:181] (0x24cc230) (3) Data frame sent\nI0111 16:28:02.827973 82 
log.go:181] (0x24cc070) Data frame received for 3\nI0111 16:28:02.828145 82 log.go:181] (0x24cc230) (3) Data frame handling\nI0111 16:28:02.829469 82 log.go:181] (0x24cc070) Data frame received for 1\nI0111 16:28:02.829605 82 log.go:181] (0x24cc0e0) (1) Data frame handling\nI0111 16:28:02.829769 82 log.go:181] (0x24cc0e0) (1) Data frame sent\nI0111 16:28:02.830586 82 log.go:181] (0x24cc070) (0x24cc0e0) Stream removed, broadcasting: 1\nI0111 16:28:02.833108 82 log.go:181] (0x24cc070) Go away received\nI0111 16:28:02.835477 82 log.go:181] (0x24cc070) (0x24cc0e0) Stream removed, broadcasting: 1\nI0111 16:28:02.835697 82 log.go:181] (0x24cc070) (0x24cc230) Stream removed, broadcasting: 3\nI0111 16:28:02.835861 82 log.go:181] (0x24cc070) (0x27af880) Stream removed, broadcasting: 5\n" Jan 11 16:28:02.845: INFO: stdout: "iptables" Jan 11 16:28:02.845: INFO: proxyMode: iptables Jan 11 16:28:02.890: INFO: Waiting for pod kube-proxy-mode-detector to disappear Jan 11 16:28:02.897: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-clusterip-timeout in namespace services-3881 STEP: creating replication controller affinity-clusterip-timeout in namespace services-3881 I0111 16:28:02.941036 10 runners.go:190] Created replication controller with name: affinity-clusterip-timeout, namespace: services-3881, replica count: 3 I0111 16:28:05.992554 10 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0111 16:28:08.993431 10 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0111 16:28:11.994543 10 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 11 16:28:12.005: INFO: Creating new exec pod Jan 11 16:28:17.052: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-3881 exec execpod-affinity9r7pw -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80' Jan 11 16:28:18.473: INFO: stderr: "I0111 16:28:18.360278 102 log.go:181] (0x274a380) (0x274a460) Create stream\nI0111 16:28:18.363792 102 log.go:181] (0x274a380) (0x274a460) Stream added, broadcasting: 1\nI0111 16:28:18.374249 102 log.go:181] (0x274a380) Reply frame received for 1\nI0111 16:28:18.374664 102 log.go:181] (0x274a380) (0x269e230) Create stream\nI0111 16:28:18.374726 102 log.go:181] (0x274a380) (0x269e230) Stream added, broadcasting: 3\nI0111 16:28:18.376310 102 log.go:181] (0x274a380) Reply frame received for 3\nI0111 16:28:18.376520 102 log.go:181] (0x274a380) (0x269e3f0) Create stream\nI0111 16:28:18.376583 102 log.go:181] (0x274a380) (0x269e3f0) Stream added, broadcasting: 5\nI0111 16:28:18.377903 102 log.go:181] (0x274a380) Reply frame received for 5\nI0111 16:28:18.436347 102 log.go:181] (0x274a380) Data frame received for 5\nI0111 16:28:18.436553 102 log.go:181] (0x269e3f0) (5) Data frame handling\nI0111 16:28:18.436830 102 log.go:181] (0x269e3f0) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-clusterip-timeout 80\nI0111 16:28:18.445486 102 log.go:181] (0x274a380) Data frame received for 5\nI0111 16:28:18.445692 102 log.go:181] (0x269e3f0) (5) Data frame handling\nI0111 16:28:18.445916 102 log.go:181] (0x269e3f0) (5) Data frame sent\nConnection to affinity-clusterip-timeout 80 port [tcp/http] 
succeeded!\nI0111 16:28:18.446099 102 log.go:181] (0x274a380) Data frame received for 5\nI0111 16:28:18.446299 102 log.go:181] (0x269e3f0) (5) Data frame handling\nI0111 16:28:18.446546 102 log.go:181] (0x274a380) Data frame received for 3\nI0111 16:28:18.446758 102 log.go:181] (0x269e230) (3) Data frame handling\nI0111 16:28:18.447501 102 log.go:181] (0x274a380) Data frame received for 1\nI0111 16:28:18.447624 102 log.go:181] (0x274a460) (1) Data frame handling\nI0111 16:28:18.447760 102 log.go:181] (0x274a460) (1) Data frame sent\nI0111 16:28:18.448543 102 log.go:181] (0x274a380) (0x274a460) Stream removed, broadcasting: 1\nI0111 16:28:18.451382 102 log.go:181] (0x274a380) Go away received\nI0111 16:28:18.464072 102 log.go:181] (0x274a380) (0x274a460) Stream removed, broadcasting: 1\nI0111 16:28:18.464614 102 log.go:181] (0x274a380) (0x269e230) Stream removed, broadcasting: 3\nI0111 16:28:18.464947 102 log.go:181] (0x274a380) (0x269e3f0) Stream removed, broadcasting: 5\n" Jan 11 16:28:18.474: INFO: stdout: "" Jan 11 16:28:18.480: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-3881 exec execpod-affinity9r7pw -- /bin/sh -x -c nc -zv -t -w 2 10.96.31.227 80' Jan 11 16:28:19.942: INFO: stderr: "I0111 16:28:19.806184 123 log.go:181] (0x276a000) (0x276a1c0) Create stream\nI0111 16:28:19.809394 123 log.go:181] (0x276a000) (0x276a1c0) Stream added, broadcasting: 1\nI0111 16:28:19.827854 123 log.go:181] (0x276a000) Reply frame received for 1\nI0111 16:28:19.828256 123 log.go:181] (0x276a000) (0x276a310) Create stream\nI0111 16:28:19.828316 123 log.go:181] (0x276a000) (0x276a310) Stream added, broadcasting: 3\nI0111 16:28:19.829710 123 log.go:181] (0x276a000) Reply frame received for 3\nI0111 16:28:19.829960 123 log.go:181] (0x276a000) (0x2e1e150) Create stream\nI0111 16:28:19.830029 123 log.go:181] (0x276a000) (0x2e1e150) Stream added, broadcasting: 5\nI0111 16:28:19.831206 123 log.go:181] (0x276a000) Reply frame received for 5\nI0111 16:28:19.926238 123 log.go:181] (0x276a000) Data frame received for 5\nI0111 16:28:19.926532 123 log.go:181] (0x2e1e150) (5) Data frame handling\nI0111 16:28:19.926756 123 log.go:181] (0x276a000) Data frame received for 3\nI0111 16:28:19.926962 123 log.go:181] (0x276a310) (3) Data frame handling\nI0111 16:28:19.927176 123 log.go:181] (0x276a000) Data frame received for 1\nI0111 16:28:19.927335 123 log.go:181] (0x276a1c0) (1) Data frame handling\nI0111 16:28:19.927473 123 log.go:181] (0x276a1c0) (1) Data frame sent\n+ nc -zv -t -w 2 10.96.31.227 80\nConnection to 10.96.31.227 80 port [tcp/http] succeeded!\nI0111 16:28:19.928091 123 log.go:181] (0x2e1e150) (5) Data frame sent\nI0111 16:28:19.928212 123 log.go:181] (0x276a000) Data frame received for 5\nI0111 16:28:19.928311 123 log.go:181] (0x2e1e150) (5) Data frame handling\nI0111 16:28:19.929606 123 log.go:181] (0x276a000) (0x276a1c0) Stream removed, broadcasting: 1\nI0111 16:28:19.931287 123 log.go:181] (0x276a000) Go away received\nI0111 16:28:19.933668 123 log.go:181] (0x276a000) (0x276a1c0) Stream removed, broadcasting: 1\nI0111 16:28:19.933859 123 log.go:181] (0x276a000) (0x276a310) Stream removed, broadcasting: 3\nI0111 16:28:19.934031 123 log.go:181] (0x276a000) (0x2e1e150) Stream removed, broadcasting: 5\n" Jan 11 16:28:19.943: INFO: stdout: "" Jan 11 16:28:19.945: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-3881 exec execpod-affinity9r7pw -- 
/bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.96.31.227:80/ ; done' Jan 11 16:28:21.497: INFO: stderr: "I0111 16:28:21.289572 143 log.go:181] (0x27dee70) (0x27df960) Create stream\nI0111 16:28:21.294498 143 log.go:181] (0x27dee70) (0x27df960) Stream added, broadcasting: 1\nI0111 16:28:21.313752 143 log.go:181] (0x27dee70) Reply frame received for 1\nI0111 16:28:21.314222 143 log.go:181] (0x27dee70) (0x297e0e0) Create stream\nI0111 16:28:21.314291 143 log.go:181] (0x27dee70) (0x297e0e0) Stream added, broadcasting: 3\nI0111 16:28:21.315414 143 log.go:181] (0x27dee70) Reply frame received for 3\nI0111 16:28:21.315650 143 log.go:181] (0x27dee70) (0x27dfb20) Create stream\nI0111 16:28:21.315717 143 log.go:181] (0x27dee70) (0x27dfb20) Stream added, broadcasting: 5\nI0111 16:28:21.316755 143 log.go:181] (0x27dee70) Reply frame received for 5\nI0111 16:28:21.388007 143 log.go:181] (0x27dee70) Data frame received for 3\nI0111 16:28:21.388362 143 log.go:181] (0x27dee70) Data frame received for 5\nI0111 16:28:21.388530 143 log.go:181] (0x27dfb20) (5) Data frame handling\nI0111 16:28:21.388640 143 log.go:181] (0x297e0e0) (3) Data frame handling\nI0111 16:28:21.389497 143 log.go:181] (0x297e0e0) (3) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.31.227:80/\nI0111 16:28:21.390177 143 log.go:181] (0x27dfb20) (5) Data frame sent\nI0111 16:28:21.391788 143 log.go:181] (0x27dee70) Data frame received for 3\nI0111 16:28:21.391971 143 log.go:181] (0x297e0e0) (3) Data frame handling\nI0111 16:28:21.392186 143 log.go:181] (0x297e0e0) (3) Data frame sent\nI0111 16:28:21.392567 143 log.go:181] (0x27dee70) Data frame received for 5\nI0111 16:28:21.392660 143 log.go:181] (0x27dee70) Data frame received for 3\nI0111 16:28:21.392754 143 log.go:181] (0x297e0e0) (3) Data frame handling\nI0111 16:28:21.392828 143 log.go:181] (0x297e0e0) (3) Data frame sent\nI0111 16:28:21.392992 143 log.go:181] (0x27dfb20) (5) Data frame handling\nI0111 16:28:21.393146 143 log.go:181] (0x27dfb20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.31.227:80/I0111 16:28:21.393252 143 log.go:181] (0x27dee70) Data frame received for 5\nI0111 16:28:21.393341 143 log.go:181] (0x27dfb20) (5) Data frame handling\nI0111 16:28:21.393453 143 log.go:181] (0x27dfb20) (5) Data frame sent\n\nI0111 16:28:21.396611 143 log.go:181] (0x27dee70) Data frame received for 3\nI0111 16:28:21.396713 143 log.go:181] (0x297e0e0) (3) Data frame handling\nI0111 16:28:21.396818 143 log.go:181] (0x297e0e0) (3) Data frame sent\nI0111 16:28:21.397249 143 log.go:181] (0x27dee70) Data frame received for 5\nI0111 16:28:21.397318 143 log.go:181] (0x27dfb20) (5) Data frame handling\nI0111 16:28:21.397377 143 log.go:181] (0x27dfb20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.31.227:80/\nI0111 16:28:21.397436 143 log.go:181] (0x27dee70) Data frame received for 3\nI0111 16:28:21.397485 143 log.go:181] (0x297e0e0) (3) Data frame handling\nI0111 16:28:21.397546 143 log.go:181] (0x297e0e0) (3) Data frame sent\nI0111 16:28:21.401634 143 log.go:181] (0x27dee70) Data frame received for 3\nI0111 16:28:21.401782 143 log.go:181] (0x297e0e0) (3) Data frame handling\nI0111 16:28:21.401933 143 log.go:181] (0x297e0e0) (3) Data frame sent\nI0111 16:28:21.402206 143 log.go:181] (0x27dee70) Data frame received for 5\nI0111 16:28:21.402343 143 log.go:181] (0x27dfb20) (5) Data frame handling\nI0111 16:28:21.402443 143 log.go:181] (0x27dfb20) (5) Data frame sent\n+ 
echo\n+ curl -q -s --connect-timeout 2 http://10.96.31.227:80/\nI0111 16:28:21.402534 143 log.go:181] (0x27dee70) Data frame received for 3\nI0111 16:28:21.402613 143 log.go:181] (0x297e0e0) (3) Data frame handling\nI0111 16:28:21.402708 143 log.go:181] (0x297e0e0) (3) Data frame sent\nI0111 16:28:21.407158 143 log.go:181] (0x27dee70) Data frame received for 3\nI0111 16:28:21.407218 143 log.go:181] (0x297e0e0) (3) Data frame handling\nI0111 16:28:21.407275 143 log.go:181] (0x297e0e0) (3) Data frame sent\nI0111 16:28:21.408471 143 log.go:181] (0x27dee70) Data frame received for 5\nI0111 16:28:21.408589 143 log.go:181] (0x27dfb20) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.31.227:80/\nI0111 16:28:21.408703 143 log.go:181] (0x27dee70) Data frame received for 3\nI0111 16:28:21.409034 143 log.go:181] (0x297e0e0) (3) Data frame handling\nI0111 16:28:21.409202 143 log.go:181] (0x27dfb20) (5) Data frame sent\nI0111 16:28:21.409342 143 log.go:181] (0x297e0e0) (3) Data frame sent\nI0111 16:28:21.411766 143 log.go:181] (0x27dee70) Data frame received for 3\nI0111 16:28:21.411864 143 log.go:181] (0x297e0e0) (3) Data frame handling\nI0111 16:28:21.411989 143 log.go:181] (0x297e0e0) (3) Data frame sent\nI0111 16:28:21.412573 143 log.go:181] (0x27dee70) Data frame received for 5\nI0111 16:28:21.412697 143 log.go:181] (0x27dfb20) (5) Data frame handling\nI0111 16:28:21.412773 143 log.go:181] (0x27dee70) Data frame received for 3\nI0111 16:28:21.412913 143 log.go:181] (0x297e0e0) (3) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.31.227:80/\nI0111 16:28:21.413004 143 log.go:181] (0x297e0e0) (3) Data frame sent\nI0111 16:28:21.413083 143 log.go:181] (0x27dfb20) (5) Data frame sent\nI0111 16:28:21.416432 143 log.go:181] (0x27dee70) Data frame received for 3\nI0111 16:28:21.416528 143 log.go:181] (0x297e0e0) (3) Data frame handling\nI0111 16:28:21.416628 143 log.go:181] (0x297e0e0) (3) Data frame sent\nI0111 16:28:21.418016 143 log.go:181] (0x27dee70) Data frame received for 5\nI0111 16:28:21.418177 143 log.go:181] (0x27dfb20) (5) Data frame handling\nI0111 16:28:21.418316 143 log.go:181] (0x27dfb20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeoutI0111 16:28:21.418433 143 log.go:181] (0x27dee70) Data frame received for 5\nI0111 16:28:21.418542 143 log.go:181] (0x27dfb20) (5) Data frame handling\n 2 http://10.96.31.227:80/\nI0111 16:28:21.418644 143 log.go:181] (0x27dee70) Data frame received for 3\nI0111 16:28:21.418760 143 log.go:181] (0x297e0e0) (3) Data frame handling\nI0111 16:28:21.418875 143 log.go:181] (0x27dfb20) (5) Data frame sent\nI0111 16:28:21.418982 143 log.go:181] (0x297e0e0) (3) Data frame sent\nI0111 16:28:21.420652 143 log.go:181] (0x27dee70) Data frame received for 3\nI0111 16:28:21.420759 143 log.go:181] (0x297e0e0) (3) Data frame handling\nI0111 16:28:21.420891 143 log.go:181] (0x297e0e0) (3) Data frame sent\nI0111 16:28:21.421456 143 log.go:181] (0x27dee70) Data frame received for 5\nI0111 16:28:21.421531 143 log.go:181] (0x27dfb20) (5) Data frame handling\nI0111 16:28:21.421600 143 log.go:181] (0x27dfb20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.31.227:80/\nI0111 16:28:21.421855 143 log.go:181] (0x27dee70) Data frame received for 3\nI0111 16:28:21.422000 143 log.go:181] (0x297e0e0) (3) Data frame handling\nI0111 16:28:21.422211 143 log.go:181] (0x297e0e0) (3) Data frame sent\nI0111 16:28:21.425839 143 log.go:181] (0x27dee70) Data frame received for 3\nI0111 16:28:21.425972 143 
log.go:181] (0x297e0e0) (3) Data frame handling\nI0111 16:28:21.426088 143 log.go:181] (0x297e0e0) (3) Data frame sent\nI0111 16:28:21.426701 143 log.go:181] (0x27dee70) Data frame received for 5\nI0111 16:28:21.426813 143 log.go:181] (0x27dfb20) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.31.227:80/\nI0111 16:28:21.426927 143 log.go:181] (0x27dee70) Data frame received for 3\nI0111 16:28:21.427053 143 log.go:181] (0x297e0e0) (3) Data frame handling\nI0111 16:28:21.427162 143 log.go:181] (0x297e0e0) (3) Data frame sent\nI0111 16:28:21.427281 143 log.go:181] (0x27dfb20) (5) Data frame sent\nI0111 16:28:21.431263 143 log.go:181] (0x27dee70) Data frame received for 3\nI0111 16:28:21.431424 143 log.go:181] (0x297e0e0) (3) Data frame handling\nI0111 16:28:21.431618 143 log.go:181] (0x297e0e0) (3) Data frame sent\nI0111 16:28:21.432167 143 log.go:181] (0x27dee70) Data frame received for 5\nI0111 16:28:21.432326 143 log.go:181] (0x27dfb20) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.31.227:80/\nI0111 16:28:21.432485 143 log.go:181] (0x27dee70) Data frame received for 3\nI0111 16:28:21.432656 143 log.go:181] (0x297e0e0) (3) Data frame handling\nI0111 16:28:21.432788 143 log.go:181] (0x27dfb20) (5) Data frame sent\nI0111 16:28:21.433018 143 log.go:181] (0x297e0e0) (3) Data frame sent\nI0111 16:28:21.438688 143 log.go:181] (0x27dee70) Data frame received for 3\nI0111 16:28:21.438811 143 log.go:181] (0x297e0e0) (3) Data frame handling\nI0111 16:28:21.438961 143 log.go:181] (0x297e0e0) (3) Data frame sent\nI0111 16:28:21.439265 143 log.go:181] (0x27dee70) Data frame received for 5\nI0111 16:28:21.439427 143 log.go:181] (0x27dfb20) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.31.227:80/\nI0111 16:28:21.439543 143 log.go:181] (0x27dee70) Data frame received for 3\nI0111 16:28:21.439677 143 log.go:181] (0x297e0e0) (3) Data frame handling\nI0111 16:28:21.439784 143 log.go:181] (0x27dfb20) (5) Data frame sent\nI0111 16:28:21.439960 143 log.go:181] (0x297e0e0) (3) Data frame sent\nI0111 16:28:21.444727 143 log.go:181] (0x27dee70) Data frame received for 3\nI0111 16:28:21.444923 143 log.go:181] (0x297e0e0) (3) Data frame handling\nI0111 16:28:21.445122 143 log.go:181] (0x297e0e0) (3) Data frame sent\nI0111 16:28:21.445819 143 log.go:181] (0x27dee70) Data frame received for 5\nI0111 16:28:21.445982 143 log.go:181] (0x27dfb20) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.31.227:80/\nI0111 16:28:21.446164 143 log.go:181] (0x27dee70) Data frame received for 3\nI0111 16:28:21.446310 143 log.go:181] (0x297e0e0) (3) Data frame handling\nI0111 16:28:21.446440 143 log.go:181] (0x297e0e0) (3) Data frame sent\nI0111 16:28:21.446590 143 log.go:181] (0x27dfb20) (5) Data frame sent\nI0111 16:28:21.450511 143 log.go:181] (0x27dee70) Data frame received for 3\nI0111 16:28:21.450633 143 log.go:181] (0x297e0e0) (3) Data frame handling\nI0111 16:28:21.450754 143 log.go:181] (0x297e0e0) (3) Data frame sent\nI0111 16:28:21.451461 143 log.go:181] (0x27dee70) Data frame received for 3\nI0111 16:28:21.451666 143 log.go:181] (0x297e0e0) (3) Data frame handling\nI0111 16:28:21.451812 143 log.go:181] (0x27dee70) Data frame received for 5\nI0111 16:28:21.452001 143 log.go:181] (0x27dfb20) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.31.227:80/\nI0111 16:28:21.452166 143 log.go:181] (0x297e0e0) (3) Data frame sent\nI0111 16:28:21.452312 143 log.go:181] (0x27dfb20) (5) 
Data frame sent\nI0111 16:28:21.456529 143 log.go:181] (0x27dee70) Data frame received for 3\nI0111 16:28:21.456642 143 log.go:181] (0x297e0e0) (3) Data frame handling\nI0111 16:28:21.456764 143 log.go:181] (0x297e0e0) (3) Data frame sent\nI0111 16:28:21.457691 143 log.go:181] (0x27dee70) Data frame received for 3\nI0111 16:28:21.457897 143 log.go:181] (0x297e0e0) (3) Data frame handling\nI0111 16:28:21.458057 143 log.go:181] (0x27dee70) Data frame received for 5\nI0111 16:28:21.458208 143 log.go:181] (0x27dfb20) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.31.227:80/\nI0111 16:28:21.458334 143 log.go:181] (0x297e0e0) (3) Data frame sent\nI0111 16:28:21.458502 143 log.go:181] (0x27dfb20) (5) Data frame sent\nI0111 16:28:21.463755 143 log.go:181] (0x27dee70) Data frame received for 3\nI0111 16:28:21.463882 143 log.go:181] (0x297e0e0) (3) Data frame handling\nI0111 16:28:21.464020 143 log.go:181] (0x297e0e0) (3) Data frame sent\nI0111 16:28:21.464794 143 log.go:181] (0x27dee70) Data frame received for 5\nI0111 16:28:21.465008 143 log.go:181] (0x27dfb20) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.31.227:80/\nI0111 16:28:21.465184 143 log.go:181] (0x27dee70) Data frame received for 3\nI0111 16:28:21.465333 143 log.go:181] (0x297e0e0) (3) Data frame handling\nI0111 16:28:21.465446 143 log.go:181] (0x27dfb20) (5) Data frame sent\nI0111 16:28:21.465558 143 log.go:181] (0x297e0e0) (3) Data frame sent\nI0111 16:28:21.469151 143 log.go:181] (0x27dee70) Data frame received for 3\nI0111 16:28:21.469252 143 log.go:181] (0x297e0e0) (3) Data frame handling\nI0111 16:28:21.469352 143 log.go:181] (0x297e0e0) (3) Data frame sent\nI0111 16:28:21.470294 143 log.go:181] (0x27dee70) Data frame received for 3\nI0111 16:28:21.470440 143 log.go:181] (0x297e0e0) (3) Data frame handling\nI0111 16:28:21.470651 143 log.go:181] (0x27dee70) Data frame received for 5\nI0111 16:28:21.470862 143 log.go:181] (0x27dfb20) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.31.227:80/\nI0111 16:28:21.471008 143 log.go:181] (0x297e0e0) (3) Data frame sent\nI0111 16:28:21.471147 143 log.go:181] (0x27dfb20) (5) Data frame sent\nI0111 16:28:21.475856 143 log.go:181] (0x27dee70) Data frame received for 3\nI0111 16:28:21.475972 143 log.go:181] (0x297e0e0) (3) Data frame handling\nI0111 16:28:21.476106 143 log.go:181] (0x297e0e0) (3) Data frame sent\nI0111 16:28:21.477095 143 log.go:181] (0x27dee70) Data frame received for 5\nI0111 16:28:21.477290 143 log.go:181] (0x27dfb20) (5) Data frame handling\nI0111 16:28:21.477435 143 log.go:181] (0x27dee70) Data frame received for 3\nI0111 16:28:21.477610 143 log.go:181] (0x297e0e0) (3) Data frame handling\nI0111 16:28:21.479363 143 log.go:181] (0x27dee70) Data frame received for 1\nI0111 16:28:21.479511 143 log.go:181] (0x27df960) (1) Data frame handling\nI0111 16:28:21.479702 143 log.go:181] (0x27df960) (1) Data frame sent\nI0111 16:28:21.482137 143 log.go:181] (0x27dee70) (0x27df960) Stream removed, broadcasting: 1\nI0111 16:28:21.482970 143 log.go:181] (0x27dee70) Go away received\nI0111 16:28:21.485914 143 log.go:181] (0x27dee70) (0x27df960) Stream removed, broadcasting: 1\nI0111 16:28:21.486174 143 log.go:181] (0x27dee70) (0x297e0e0) Stream removed, broadcasting: 3\nI0111 16:28:21.486359 143 log.go:181] (0x27dee70) (0x27dfb20) Stream removed, broadcasting: 5\n" Jan 11 16:28:21.502: INFO: stdout: 
"\naffinity-clusterip-timeout-246f6\naffinity-clusterip-timeout-246f6\naffinity-clusterip-timeout-246f6\naffinity-clusterip-timeout-246f6\naffinity-clusterip-timeout-246f6\naffinity-clusterip-timeout-246f6\naffinity-clusterip-timeout-246f6\naffinity-clusterip-timeout-246f6\naffinity-clusterip-timeout-246f6\naffinity-clusterip-timeout-246f6\naffinity-clusterip-timeout-246f6\naffinity-clusterip-timeout-246f6\naffinity-clusterip-timeout-246f6\naffinity-clusterip-timeout-246f6\naffinity-clusterip-timeout-246f6\naffinity-clusterip-timeout-246f6" Jan 11 16:28:21.502: INFO: Received response from host: affinity-clusterip-timeout-246f6 Jan 11 16:28:21.502: INFO: Received response from host: affinity-clusterip-timeout-246f6 Jan 11 16:28:21.502: INFO: Received response from host: affinity-clusterip-timeout-246f6 Jan 11 16:28:21.503: INFO: Received response from host: affinity-clusterip-timeout-246f6 Jan 11 16:28:21.503: INFO: Received response from host: affinity-clusterip-timeout-246f6 Jan 11 16:28:21.503: INFO: Received response from host: affinity-clusterip-timeout-246f6 Jan 11 16:28:21.503: INFO: Received response from host: affinity-clusterip-timeout-246f6 Jan 11 16:28:21.503: INFO: Received response from host: affinity-clusterip-timeout-246f6 Jan 11 16:28:21.503: INFO: Received response from host: affinity-clusterip-timeout-246f6 Jan 11 16:28:21.503: INFO: Received response from host: affinity-clusterip-timeout-246f6 Jan 11 16:28:21.503: INFO: Received response from host: affinity-clusterip-timeout-246f6 Jan 11 16:28:21.503: INFO: Received response from host: affinity-clusterip-timeout-246f6 Jan 11 16:28:21.503: INFO: Received response from host: affinity-clusterip-timeout-246f6 Jan 11 16:28:21.503: INFO: Received response from host: affinity-clusterip-timeout-246f6 Jan 11 16:28:21.503: INFO: Received response from host: affinity-clusterip-timeout-246f6 Jan 11 16:28:21.503: INFO: Received response from host: affinity-clusterip-timeout-246f6 Jan 11 16:28:21.504: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-3881 exec execpod-affinity9r7pw -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.96.31.227:80/' Jan 11 16:28:22.891: INFO: stderr: "I0111 16:28:22.781796 164 log.go:181] (0x2736150) (0x2736380) Create stream\nI0111 16:28:22.785025 164 log.go:181] (0x2736150) (0x2736380) Stream added, broadcasting: 1\nI0111 16:28:22.802270 164 log.go:181] (0x2736150) Reply frame received for 1\nI0111 16:28:22.802731 164 log.go:181] (0x2736150) (0x30a8070) Create stream\nI0111 16:28:22.802799 164 log.go:181] (0x2736150) (0x30a8070) Stream added, broadcasting: 3\nI0111 16:28:22.804081 164 log.go:181] (0x2736150) Reply frame received for 3\nI0111 16:28:22.804293 164 log.go:181] (0x2736150) (0x2736540) Create stream\nI0111 16:28:22.804356 164 log.go:181] (0x2736150) (0x2736540) Stream added, broadcasting: 5\nI0111 16:28:22.805516 164 log.go:181] (0x2736150) Reply frame received for 5\nI0111 16:28:22.871802 164 log.go:181] (0x2736150) Data frame received for 5\nI0111 16:28:22.872039 164 log.go:181] (0x2736540) (5) Data frame handling\nI0111 16:28:22.872423 164 log.go:181] (0x2736540) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.96.31.227:80/\nI0111 16:28:22.874169 164 log.go:181] (0x2736150) Data frame received for 3\nI0111 16:28:22.874272 164 log.go:181] (0x30a8070) (3) Data frame handling\nI0111 16:28:22.874390 164 log.go:181] (0x30a8070) (3) Data frame sent\nI0111 16:28:22.875310 164 log.go:181] 
(0x2736150) Data frame received for 5\nI0111 16:28:22.875471 164 log.go:181] (0x2736540) (5) Data frame handling\nI0111 16:28:22.875755 164 log.go:181] (0x2736150) Data frame received for 3\nI0111 16:28:22.875876 164 log.go:181] (0x30a8070) (3) Data frame handling\nI0111 16:28:22.877040 164 log.go:181] (0x2736150) Data frame received for 1\nI0111 16:28:22.877192 164 log.go:181] (0x2736380) (1) Data frame handling\nI0111 16:28:22.877352 164 log.go:181] (0x2736380) (1) Data frame sent\nI0111 16:28:22.878054 164 log.go:181] (0x2736150) (0x2736380) Stream removed, broadcasting: 1\nI0111 16:28:22.879973 164 log.go:181] (0x2736150) Go away received\nI0111 16:28:22.882818 164 log.go:181] (0x2736150) (0x2736380) Stream removed, broadcasting: 1\nI0111 16:28:22.883043 164 log.go:181] (0x2736150) (0x30a8070) Stream removed, broadcasting: 3\nI0111 16:28:22.883218 164 log.go:181] (0x2736150) (0x2736540) Stream removed, broadcasting: 5\n" Jan 11 16:28:22.892: INFO: stdout: "affinity-clusterip-timeout-246f6" Jan 11 16:28:42.893: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-3881 exec execpod-affinity9r7pw -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.96.31.227:80/' Jan 11 16:28:44.351: INFO: stderr: "I0111 16:28:44.225877 184 log.go:181] (0x28ea000) (0x28ea070) Create stream\nI0111 16:28:44.228576 184 log.go:181] (0x28ea000) (0x28ea070) Stream added, broadcasting: 1\nI0111 16:28:44.248581 184 log.go:181] (0x28ea000) Reply frame received for 1\nI0111 16:28:44.249108 184 log.go:181] (0x28ea000) (0x2baa150) Create stream\nI0111 16:28:44.249176 184 log.go:181] (0x28ea000) (0x2baa150) Stream added, broadcasting: 3\nI0111 16:28:44.250627 184 log.go:181] (0x28ea000) Reply frame received for 3\nI0111 16:28:44.250891 184 log.go:181] (0x28ea000) (0x29240e0) Create stream\nI0111 16:28:44.250955 184 log.go:181] (0x28ea000) (0x29240e0) Stream added, broadcasting: 5\nI0111 16:28:44.251955 184 log.go:181] (0x28ea000) Reply frame received for 5\nI0111 16:28:44.331271 184 log.go:181] (0x28ea000) Data frame received for 5\nI0111 16:28:44.331483 184 log.go:181] (0x29240e0) (5) Data frame handling\nI0111 16:28:44.331860 184 log.go:181] (0x29240e0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.96.31.227:80/\nI0111 16:28:44.332802 184 log.go:181] (0x28ea000) Data frame received for 5\nI0111 16:28:44.332944 184 log.go:181] (0x29240e0) (5) Data frame handling\nI0111 16:28:44.333346 184 log.go:181] (0x28ea000) Data frame received for 3\nI0111 16:28:44.333558 184 log.go:181] (0x2baa150) (3) Data frame handling\nI0111 16:28:44.333732 184 log.go:181] (0x2baa150) (3) Data frame sent\nI0111 16:28:44.333870 184 log.go:181] (0x28ea000) Data frame received for 3\nI0111 16:28:44.334099 184 log.go:181] (0x2baa150) (3) Data frame handling\nI0111 16:28:44.335101 184 log.go:181] (0x28ea000) Data frame received for 1\nI0111 16:28:44.335201 184 log.go:181] (0x28ea070) (1) Data frame handling\nI0111 16:28:44.335284 184 log.go:181] (0x28ea070) (1) Data frame sent\nI0111 16:28:44.335989 184 log.go:181] (0x28ea000) (0x28ea070) Stream removed, broadcasting: 1\nI0111 16:28:44.339102 184 log.go:181] (0x28ea000) Go away received\nI0111 16:28:44.341615 184 log.go:181] (0x28ea000) (0x28ea070) Stream removed, broadcasting: 1\nI0111 16:28:44.341883 184 log.go:181] (0x28ea000) (0x2baa150) Stream removed, broadcasting: 3\nI0111 16:28:44.342105 184 log.go:181] (0x28ea000) (0x29240e0) Stream removed, broadcasting: 5\n" Jan 11 16:28:44.352: INFO: 
stdout: "affinity-clusterip-timeout-s8nmr" Jan 11 16:28:44.352: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-timeout in namespace services-3881, will wait for the garbage collector to delete the pods Jan 11 16:28:44.481: INFO: Deleting ReplicationController affinity-clusterip-timeout took: 7.194383ms Jan 11 16:28:45.182: INFO: Terminating ReplicationController affinity-clusterip-timeout pods took: 701.071376ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:29:19.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3881" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 • [SLOW TEST:82.782 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":309,"completed":39,"skipped":888,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:29:19.965: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test emptydir 0644 on node default medium Jan 11 16:29:20.032: INFO: Waiting up to 5m0s for pod "pod-771e7549-cf07-44e6-9901-93a17148474a" in namespace "emptydir-288" to be "Succeeded or Failed" Jan 11 16:29:20.043: INFO: Pod "pod-771e7549-cf07-44e6-9901-93a17148474a": Phase="Pending", Reason="", readiness=false. Elapsed: 10.515074ms Jan 11 16:29:22.051: INFO: Pod "pod-771e7549-cf07-44e6-9901-93a17148474a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018622177s Jan 11 16:29:24.068: INFO: Pod "pod-771e7549-cf07-44e6-9901-93a17148474a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.035663101s STEP: Saw pod success Jan 11 16:29:24.068: INFO: Pod "pod-771e7549-cf07-44e6-9901-93a17148474a" satisfied condition "Succeeded or Failed" Jan 11 16:29:24.073: INFO: Trying to get logs from node leguer-worker pod pod-771e7549-cf07-44e6-9901-93a17148474a container test-container: STEP: delete the pod Jan 11 16:29:24.110: INFO: Waiting for pod pod-771e7549-cf07-44e6-9901-93a17148474a to disappear Jan 11 16:29:24.120: INFO: Pod pod-771e7549-cf07-44e6-9901-93a17148474a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:29:24.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-288" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":40,"skipped":892,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:29:24.138: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:187 [It] should be submitted and removed [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Jan 11 16:29:24.244: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:29:39.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-685" for this suite. 
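[editor's note] The pod lifecycle spec above ("should be submitted and removed") hinges on a watch: it registers a watch first, submits the pod, verifies an ADDED event, deletes the pod gracefully, and verifies a DELETED event. Below is a minimal client-go sketch of that watch-then-create pattern, offered as an illustration rather than the suite's implementation; the namespace and pod name are placeholders, and the image is reused from elsewhere in this log.

package main

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ns, name := "default", "watch-demo" // placeholder namespace and pod name
	pods := cs.CoreV1().Pods(ns)

	// Start the watch before creating the pod so no event can be missed.
	w, err := pods.Watch(context.TODO(), metav1.ListOptions{
		FieldSelector: "metadata.name=" + name,
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: v1.PodSpec{
			Containers: []v1.Container{{
				Name:  "main",
				Image: "docker.io/library/httpd:2.4.38-alpine", // image reused from this log
			}},
		},
	}
	if _, err := pods.Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	grace := int64(30)
	for ev := range w.ResultChan() {
		switch ev.Type {
		case watch.Added:
			fmt.Println("pod creation observed")
			// Delete gracefully once the creation event has been seen.
			if err := pods.Delete(context.TODO(), name, metav1.DeleteOptions{GracePeriodSeconds: &grace}); err != nil {
				panic(err)
			}
		case watch.Deleted:
			fmt.Println("pod deletion observed")
			return
		}
	}
}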
• [SLOW TEST:15.697 seconds] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should be submitted and removed [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":309,"completed":41,"skipped":912,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:29:39.838: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating configMap with name configmap-test-volume-8796bc22-2360-47c4-a817-3d39b327527b STEP: Creating a pod to test consume configMaps Jan 11 16:29:39.962: INFO: Waiting up to 5m0s for pod "pod-configmaps-ec182e29-0d02-4efa-b443-1347ac44556f" in namespace "configmap-4925" to be "Succeeded or Failed" Jan 11 16:29:39.987: INFO: Pod "pod-configmaps-ec182e29-0d02-4efa-b443-1347ac44556f": Phase="Pending", Reason="", readiness=false. Elapsed: 24.469859ms Jan 11 16:29:42.051: INFO: Pod "pod-configmaps-ec182e29-0d02-4efa-b443-1347ac44556f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.088806185s Jan 11 16:29:44.059: INFO: Pod "pod-configmaps-ec182e29-0d02-4efa-b443-1347ac44556f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.096279517s STEP: Saw pod success Jan 11 16:29:44.059: INFO: Pod "pod-configmaps-ec182e29-0d02-4efa-b443-1347ac44556f" satisfied condition "Succeeded or Failed" Jan 11 16:29:44.086: INFO: Trying to get logs from node leguer-worker2 pod pod-configmaps-ec182e29-0d02-4efa-b443-1347ac44556f container agnhost-container: STEP: delete the pod Jan 11 16:29:44.121: INFO: Waiting for pod pod-configmaps-ec182e29-0d02-4efa-b443-1347ac44556f to disappear Jan 11 16:29:44.136: INFO: Pod pod-configmaps-ec182e29-0d02-4efa-b443-1347ac44556f no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:29:44.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4925" for this suite. 
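For reference, the ConfigMap-as-volume flow exercised above can be sketched with the Python kubernetes client; the ConfigMap name, key, namespace, and busybox image are illustrative assumptions (the conformance test itself uses an agnhost container).

```python
from kubernetes import client, config

# Sketch: mount a ConfigMap as a volume and read a key from inside the pod.
config.load_kube_config()
v1 = client.CoreV1Api()
ns = "default"  # illustrative namespace

v1.create_namespaced_config_map(ns, client.V1ConfigMap(
    metadata=client.V1ObjectMeta(name="demo-cm"),
    data={"data-1": "value-1"},
))

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="cm-volume-test"),
    spec=client.V1PodSpec(
        restart_policy="Never",
        volumes=[client.V1Volume(
            name="cm-vol",
            config_map=client.V1ConfigMapVolumeSource(name="demo-cm"),
        )],
        containers=[client.V1Container(
            name="test-container",
            image="busybox:1.32",
            command=["sh", "-c", "cat /etc/cm/data-1"],
            volume_mounts=[client.V1VolumeMount(name="cm-vol", mount_path="/etc/cm")],
        )],
    ),
)
v1.create_namespaced_pod(ns, pod)
# Once the pod reaches Succeeded, its log should print "value-1":
#   v1.read_namespaced_pod_log("cm-volume-test", ns)
```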
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":309,"completed":42,"skipped":929,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:29:44.153: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating configMap with name projected-configmap-test-volume-map-9bf6a5b5-5a19-4c76-95f1-ff5f296d27ac STEP: Creating a pod to test consume configMaps Jan 11 16:29:44.274: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-95fc10c6-fb4a-4534-abd2-8e1e97ccaa9d" in namespace "projected-2526" to be "Succeeded or Failed" Jan 11 16:29:44.293: INFO: Pod "pod-projected-configmaps-95fc10c6-fb4a-4534-abd2-8e1e97ccaa9d": Phase="Pending", Reason="", readiness=false. Elapsed: 18.668309ms Jan 11 16:29:46.300: INFO: Pod "pod-projected-configmaps-95fc10c6-fb4a-4534-abd2-8e1e97ccaa9d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025581517s Jan 11 16:29:48.309: INFO: Pod "pod-projected-configmaps-95fc10c6-fb4a-4534-abd2-8e1e97ccaa9d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034301538s STEP: Saw pod success Jan 11 16:29:48.309: INFO: Pod "pod-projected-configmaps-95fc10c6-fb4a-4534-abd2-8e1e97ccaa9d" satisfied condition "Succeeded or Failed" Jan 11 16:29:48.315: INFO: Trying to get logs from node leguer-worker pod pod-projected-configmaps-95fc10c6-fb4a-4534-abd2-8e1e97ccaa9d container agnhost-container: STEP: delete the pod Jan 11 16:29:48.472: INFO: Waiting for pod pod-projected-configmaps-95fc10c6-fb4a-4534-abd2-8e1e97ccaa9d to disappear Jan 11 16:29:48.540: INFO: Pod pod-projected-configmaps-95fc10c6-fb4a-4534-abd2-8e1e97ccaa9d no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:29:48.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2526" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":309,"completed":43,"skipped":938,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:29:48.581: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:85 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 11 16:29:48.683: INFO: Creating deployment "test-recreate-deployment" Jan 11 16:29:48.721: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Jan 11 16:29:48.733: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Jan 11 16:29:50.749: INFO: Waiting deployment "test-recreate-deployment" to complete Jan 11 16:29:50.754: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745979388, loc:(*time.Location)(0x5f133f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745979388, loc:(*time.Location)(0x5f133f0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745979388, loc:(*time.Location)(0x5f133f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745979388, loc:(*time.Location)(0x5f133f0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-786dd7c454\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 11 16:29:52.761: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Jan 11 16:29:52.779: INFO: Updating deployment test-recreate-deployment Jan 11 16:29:52.779: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:79 Jan 11 16:29:53.654: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-8842 76eed2ca-da6e-4ec3-803c-64cfee98e67b 191715 2 2021-01-11 16:29:48 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2021-01-11 16:29:52 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-01-11 16:29:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0x866bd58 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2021-01-11 16:29:53 +0000 UTC,LastTransitionTime:2021-01-11 16:29:53 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-f79dd4667" is progressing.,LastUpdateTime:2021-01-11 16:29:53 +0000 UTC,LastTransitionTime:2021-01-11 16:29:48 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Jan 11 16:29:53.665: INFO: New ReplicaSet "test-recreate-deployment-f79dd4667" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-f79dd4667 deployment-8842 c3a20847-773a-4f97-a939-11842ce723db 191714 1 2021-01-11 16:29:52 +0000 UTC map[name:sample-pod-3 pod-template-hash:f79dd4667] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 76eed2ca-da6e-4ec3-803c-64cfee98e67b 0xa046160 0xa046161}] [] [{kube-controller-manager Update apps/v1 2021-01-11 16:29:53 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"76eed2ca-da6e-4ec3-803c-64cfee98e67b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: f79dd4667,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:f79dd4667] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xa0461d8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 11 16:29:53.666: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Jan 11 16:29:53.667: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-786dd7c454 deployment-8842 52bf04af-b15a-494b-b09c-3b6c0803f38d 191702 2 2021-01-11 16:29:48 +0000 UTC map[name:sample-pod-3 pod-template-hash:786dd7c454] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 76eed2ca-da6e-4ec3-803c-64cfee98e67b 0xa046067 0xa046068}] [] [{kube-controller-manager Update apps/v1 2021-01-11 16:29:52 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"76eed2ca-da6e-4ec3-803c-64cfee98e67b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 786dd7c454,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:786dd7c454] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.21 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xa0460f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 11 16:29:53.686: INFO: Pod "test-recreate-deployment-f79dd4667-gzjsq" is not available: &Pod{ObjectMeta:{test-recreate-deployment-f79dd4667-gzjsq test-recreate-deployment-f79dd4667- deployment-8842 9bdb57c1-f809-4301-9dbb-95b7be7f6252 191712 0 2021-01-11 16:29:52 +0000 UTC map[name:sample-pod-3 pod-template-hash:f79dd4667] map[] [{apps/v1 ReplicaSet test-recreate-deployment-f79dd4667 c3a20847-773a-4f97-a939-11842ce723db 0x8e72d20 0x8e72d21}] [] [{kube-controller-manager Update v1 2021-01-11 16:29:52 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c3a20847-773a-4f97-a939-11842ce723db\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-11 16:29:53 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zvkv8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zvkv8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zvkv8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 16:29:53 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 16:29:53 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 16:29:53 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 16:29:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2021-01-11 16:29:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:29:53.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-8842" for this suite. • [SLOW TEST:5.152 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":309,"completed":44,"skipped":952,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:29:53.734: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:150 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:29:54.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9107" for this suite. 
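The QoS-class check above relies on the rule that a pod whose containers have requests equal to limits for both cpu and memory is classified as Guaranteed. A minimal sketch with the Python kubernetes client (names, namespace, and resource amounts are illustrative assumptions):

```python
from kubernetes import client, config

# Sketch: requests == limits for cpu and memory => status.qosClass "Guaranteed".
config.load_kube_config()
v1 = client.CoreV1Api()
ns = "default"  # illustrative

resources = client.V1ResourceRequirements(
    requests={"cpu": "100m", "memory": "100Mi"},
    limits={"cpu": "100m", "memory": "100Mi"},
)
pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="qos-demo"),
    spec=client.V1PodSpec(containers=[
        client.V1Container(
            name="agnhost",
            image="k8s.gcr.io/e2e-test-images/agnhost:2.21",
            resources=resources,
        ),
    ]),
)
v1.create_namespaced_pod(ns, pod)
# The QoS class is set at creation time and can be read back immediately:
print(v1.read_namespaced_pod("qos-demo", ns).status.qos_class)  # expect "Guaranteed"
```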
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":309,"completed":45,"skipped":965,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:29:54.342: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating secret with name secret-test-ca6da439-d605-43db-9e4f-08ff786e69c3 STEP: Creating a pod to test consume secrets Jan 11 16:29:54.439: INFO: Waiting up to 5m0s for pod "pod-secrets-a48b100f-daec-4d14-a47a-64a3d159b6ca" in namespace "secrets-6103" to be "Succeeded or Failed" Jan 11 16:29:54.529: INFO: Pod "pod-secrets-a48b100f-daec-4d14-a47a-64a3d159b6ca": Phase="Pending", Reason="", readiness=false. Elapsed: 89.457508ms Jan 11 16:29:56.537: INFO: Pod "pod-secrets-a48b100f-daec-4d14-a47a-64a3d159b6ca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.098080591s Jan 11 16:29:58.546: INFO: Pod "pod-secrets-a48b100f-daec-4d14-a47a-64a3d159b6ca": Phase="Pending", Reason="", readiness=false. Elapsed: 4.106465788s Jan 11 16:30:00.554: INFO: Pod "pod-secrets-a48b100f-daec-4d14-a47a-64a3d159b6ca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.114840337s STEP: Saw pod success Jan 11 16:30:00.554: INFO: Pod "pod-secrets-a48b100f-daec-4d14-a47a-64a3d159b6ca" satisfied condition "Succeeded or Failed" Jan 11 16:30:00.560: INFO: Trying to get logs from node leguer-worker2 pod pod-secrets-a48b100f-daec-4d14-a47a-64a3d159b6ca container secret-env-test: STEP: delete the pod Jan 11 16:30:00.595: INFO: Waiting for pod pod-secrets-a48b100f-daec-4d14-a47a-64a3d159b6ca to disappear Jan 11 16:30:00.618: INFO: Pod pod-secrets-a48b100f-daec-4d14-a47a-64a3d159b6ca no longer exists [AfterEach] [sig-api-machinery] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:30:00.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6103" for this suite. 
• [SLOW TEST:6.291 seconds] [sig-api-machinery] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:36 should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":309,"completed":46,"skipped":971,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:30:00.634: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:30:00.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-4744" for this suite. 
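The discovery checks above (finding apiextensions.k8s.io in /apis and customresourcedefinitions under its v1 document) can be approximated with the Python kubernetes client, shown below as a sketch; the kubeconfig path handling is an assumption.

```python
from kubernetes import client, config

# Sketch: confirm the apiextensions.k8s.io group advertises v1 and that the
# customresourcedefinitions resource appears in that group/version document.
config.load_kube_config()
api_client = client.ApiClient()

groups = client.ApisApi(api_client).get_api_versions().groups
ext = next(g for g in groups if g.name == "apiextensions.k8s.io")
print([v.group_version for v in ext.versions])  # expect "apiextensions.k8s.io/v1" to be listed

res = client.ApiextensionsV1Api(api_client).get_api_resources()
print([r.name for r in res.resources])          # expect "customresourcedefinitions" to be listed
```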
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":309,"completed":47,"skipped":983,"failed":0} SSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:30:00.794: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 11 16:30:08.186: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 11 16:30:10.207: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745979408, loc:(*time.Location)(0x5f133f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745979408, loc:(*time.Location)(0x5f133f0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745979408, loc:(*time.Location)(0x5f133f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745979408, loc:(*time.Location)(0x5f133f0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 11 16:30:13.251: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:30:13.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6220" for this suite. STEP: Destroying namespace "webhook-6220-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:12.846 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":309,"completed":48,"skipped":986,"failed":0} SSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:30:13.642: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward api env vars Jan 11 16:30:13.750: INFO: Waiting up to 5m0s for pod "downward-api-25dcd55f-59c8-44f5-867e-02647da2239d" in namespace "downward-api-1010" to be "Succeeded or Failed" Jan 11 16:30:13.976: INFO: Pod "downward-api-25dcd55f-59c8-44f5-867e-02647da2239d": Phase="Pending", Reason="", readiness=false. Elapsed: 225.794437ms Jan 11 16:30:15.985: INFO: Pod "downward-api-25dcd55f-59c8-44f5-867e-02647da2239d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.234985347s Jan 11 16:30:17.994: INFO: Pod "downward-api-25dcd55f-59c8-44f5-867e-02647da2239d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.243394327s STEP: Saw pod success Jan 11 16:30:17.994: INFO: Pod "downward-api-25dcd55f-59c8-44f5-867e-02647da2239d" satisfied condition "Succeeded or Failed" Jan 11 16:30:18.000: INFO: Trying to get logs from node leguer-worker2 pod downward-api-25dcd55f-59c8-44f5-867e-02647da2239d container dapi-container: STEP: delete the pod Jan 11 16:30:18.057: INFO: Waiting for pod downward-api-25dcd55f-59c8-44f5-867e-02647da2239d to disappear Jan 11 16:30:18.080: INFO: Pod downward-api-25dcd55f-59c8-44f5-867e-02647da2239d no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:30:18.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1010" for this suite. 
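The downward-API env-var test above uses resourceFieldRef to surface a container's own cpu/memory requests and limits. A minimal sketch with the Python kubernetes client; container name, namespace, image, and resource amounts are illustrative assumptions.

```python
from kubernetes import client, config

# Sketch: expose limits.cpu/memory and requests.cpu/memory as env vars
# via env[].valueFrom.resourceFieldRef.
config.load_kube_config()
v1 = client.CoreV1Api()
ns = "default"  # illustrative

def res_env(name, resource):
    return client.V1EnvVar(
        name=name,
        value_from=client.V1EnvVarSource(
            resource_field_ref=client.V1ResourceFieldSelector(
                container_name="dapi-container", resource=resource)))

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="downward-env-demo"),
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[client.V1Container(
            name="dapi-container",
            image="busybox:1.32",
            command=["sh", "-c", "env | grep -E 'CPU|MEMORY'"],
            resources=client.V1ResourceRequirements(
                requests={"cpu": "250m", "memory": "32Mi"},
                limits={"cpu": "500m", "memory": "64Mi"}),
            env=[res_env("CPU_LIMIT", "limits.cpu"),
                 res_env("CPU_REQUEST", "requests.cpu"),
                 res_env("MEMORY_LIMIT", "limits.memory"),
                 res_env("MEMORY_REQUEST", "requests.memory")],
        )],
    ),
)
v1.create_namespaced_pod(ns, pod)  # the pod log prints the four values once it Succeeds
```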
•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":309,"completed":49,"skipped":997,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:30:18.097: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward api env vars Jan 11 16:30:18.203: INFO: Waiting up to 5m0s for pod "downward-api-b694683b-a7e2-48bd-9cab-99b7ee58a8c2" in namespace "downward-api-4326" to be "Succeeded or Failed" Jan 11 16:30:18.220: INFO: Pod "downward-api-b694683b-a7e2-48bd-9cab-99b7ee58a8c2": Phase="Pending", Reason="", readiness=false. Elapsed: 16.321516ms Jan 11 16:30:20.227: INFO: Pod "downward-api-b694683b-a7e2-48bd-9cab-99b7ee58a8c2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023587923s Jan 11 16:30:22.236: INFO: Pod "downward-api-b694683b-a7e2-48bd-9cab-99b7ee58a8c2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03250397s STEP: Saw pod success Jan 11 16:30:22.236: INFO: Pod "downward-api-b694683b-a7e2-48bd-9cab-99b7ee58a8c2" satisfied condition "Succeeded or Failed" Jan 11 16:30:22.242: INFO: Trying to get logs from node leguer-worker2 pod downward-api-b694683b-a7e2-48bd-9cab-99b7ee58a8c2 container dapi-container: STEP: delete the pod Jan 11 16:30:22.307: INFO: Waiting for pod downward-api-b694683b-a7e2-48bd-9cab-99b7ee58a8c2 to disappear Jan 11 16:30:22.313: INFO: Pod downward-api-b694683b-a7e2-48bd-9cab-99b7ee58a8c2 no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:30:22.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4326" for this suite. 
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":309,"completed":50,"skipped":1018,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:30:22.328: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test emptydir 0644 on tmpfs Jan 11 16:30:22.432: INFO: Waiting up to 5m0s for pod "pod-84975622-10d2-4d8c-a2f5-090e5332398a" in namespace "emptydir-5472" to be "Succeeded or Failed" Jan 11 16:30:22.441: INFO: Pod "pod-84975622-10d2-4d8c-a2f5-090e5332398a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.821609ms Jan 11 16:30:24.449: INFO: Pod "pod-84975622-10d2-4d8c-a2f5-090e5332398a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0163867s Jan 11 16:30:26.456: INFO: Pod "pod-84975622-10d2-4d8c-a2f5-090e5332398a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023731664s STEP: Saw pod success Jan 11 16:30:26.456: INFO: Pod "pod-84975622-10d2-4d8c-a2f5-090e5332398a" satisfied condition "Succeeded or Failed" Jan 11 16:30:26.462: INFO: Trying to get logs from node leguer-worker2 pod pod-84975622-10d2-4d8c-a2f5-090e5332398a container test-container: STEP: delete the pod Jan 11 16:30:26.693: INFO: Waiting for pod pod-84975622-10d2-4d8c-a2f5-090e5332398a to disappear Jan 11 16:30:26.754: INFO: Pod pod-84975622-10d2-4d8c-a2f5-090e5332398a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:30:26.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5472" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":51,"skipped":1032,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:30:26.836: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 11 16:30:26.964: INFO: The status of Pod test-webserver-9d0c1eed-9503-4aa4-8d95-e289cb71bafd is Pending, waiting for it to be Running (with Ready = true) Jan 11 16:30:29.024: INFO: The status of Pod test-webserver-9d0c1eed-9503-4aa4-8d95-e289cb71bafd is Pending, waiting for it to be Running (with Ready = true) Jan 11 16:30:30.973: INFO: The status of Pod test-webserver-9d0c1eed-9503-4aa4-8d95-e289cb71bafd is Running (Ready = false) Jan 11 16:30:32.973: INFO: The status of Pod test-webserver-9d0c1eed-9503-4aa4-8d95-e289cb71bafd is Running (Ready = false) Jan 11 16:30:34.980: INFO: The status of Pod test-webserver-9d0c1eed-9503-4aa4-8d95-e289cb71bafd is Running (Ready = false) Jan 11 16:30:36.973: INFO: The status of Pod test-webserver-9d0c1eed-9503-4aa4-8d95-e289cb71bafd is Running (Ready = false) Jan 11 16:30:38.986: INFO: The status of Pod test-webserver-9d0c1eed-9503-4aa4-8d95-e289cb71bafd is Running (Ready = false) Jan 11 16:30:40.972: INFO: The status of Pod test-webserver-9d0c1eed-9503-4aa4-8d95-e289cb71bafd is Running (Ready = false) Jan 11 16:30:42.971: INFO: The status of Pod test-webserver-9d0c1eed-9503-4aa4-8d95-e289cb71bafd is Running (Ready = false) Jan 11 16:30:44.971: INFO: The status of Pod test-webserver-9d0c1eed-9503-4aa4-8d95-e289cb71bafd is Running (Ready = false) Jan 11 16:30:46.969: INFO: The status of Pod test-webserver-9d0c1eed-9503-4aa4-8d95-e289cb71bafd is Running (Ready = false) Jan 11 16:30:48.984: INFO: The status of Pod test-webserver-9d0c1eed-9503-4aa4-8d95-e289cb71bafd is Running (Ready = false) Jan 11 16:30:50.972: INFO: The status of Pod test-webserver-9d0c1eed-9503-4aa4-8d95-e289cb71bafd is Running (Ready = true) Jan 11 16:30:50.978: INFO: Container started at 2021-01-11 16:30:29 +0000 UTC, pod became ready at 2021-01-11 16:30:49 +0000 UTC [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:30:50.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3065" for this suite. 
• [SLOW TEST:24.164 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":309,"completed":52,"skipped":1068,"failed":0} S ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:30:51.000: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0111 16:31:32.450572 10 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Jan 11 16:32:34.477: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. Jan 11 16:32:34.477: INFO: Deleting pod "simpletest.rc-8vvg4" in namespace "gc-5863" Jan 11 16:32:34.518: INFO: Deleting pod "simpletest.rc-d5fpx" in namespace "gc-5863" Jan 11 16:32:34.610: INFO: Deleting pod "simpletest.rc-f2g9x" in namespace "gc-5863" Jan 11 16:32:34.658: INFO: Deleting pod "simpletest.rc-gbhjb" in namespace "gc-5863" Jan 11 16:32:34.910: INFO: Deleting pod "simpletest.rc-k5xb9" in namespace "gc-5863" Jan 11 16:32:35.145: INFO: Deleting pod "simpletest.rc-lbrkk" in namespace "gc-5863" Jan 11 16:32:35.203: INFO: Deleting pod "simpletest.rc-mnvmr" in namespace "gc-5863" Jan 11 16:32:35.729: INFO: Deleting pod "simpletest.rc-q96js" in namespace "gc-5863" Jan 11 16:32:36.092: INFO: Deleting pod "simpletest.rc-sblsc" in namespace "gc-5863" Jan 11 16:32:36.321: INFO: Deleting pod "simpletest.rc-xcjrb" in namespace "gc-5863" [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:32:36.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5863" for this suite. 
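The orphaning behaviour checked above comes from the delete options: with propagationPolicy=Orphan the ReplicationController is removed but the garbage collector leaves its pods alone (their ownerReferences are cleared). A sketch with the Python kubernetes client; the RC name, namespace, and label selector are illustrative assumptions echoing the names in the run, not values taken from it.

```python
from kubernetes import client, config

# Sketch: delete an RC with propagationPolicy=Orphan so its pods are kept.
config.load_kube_config()
v1 = client.CoreV1Api()
ns = "default"             # illustrative
rc_name = "simpletest.rc"  # illustrative

v1.delete_namespaced_replication_controller(
    rc_name, ns,
    body=client.V1DeleteOptions(propagation_policy="Orphan"),
)

# The pods the RC created survive the deletion and can still be listed
# (label selector is an assumption about how the pods were labelled):
pods = v1.list_namespaced_pod(ns, label_selector="name=simpletest")
print([p.metadata.name for p in pods.items])
```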
• [SLOW TEST:105.574 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":309,"completed":53,"skipped":1069,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:32:36.576: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should check is all data is printed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 11 16:32:36.886: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-7432 version' Jan 11 16:32:38.868: INFO: stderr: "" Jan 11 16:32:38.869: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"20\", GitVersion:\"v1.20.1\", GitCommit:\"c4d752765b3bbac2237bf87cf0b1c2e307844666\", GitTreeState:\"clean\", BuildDate:\"2020-12-18T12:09:25Z\", GoVersion:\"go1.15.5\", Compiler:\"gc\", Platform:\"linux/arm\"}\nServer Version: version.Info{Major:\"1\", Minor:\"20\", GitVersion:\"v1.20.0\", GitCommit:\"af46c47ce925f4c4ad5cc8d1fca46c7b77d13b38\", GitTreeState:\"clean\", BuildDate:\"2020-12-08T22:31:47Z\", GoVersion:\"go1.15.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:32:38.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7432" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":309,"completed":54,"skipped":1081,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:32:38.887: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 11 16:32:53.174: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 11 16:32:55.372: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745979573, loc:(*time.Location)(0x5f133f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745979573, loc:(*time.Location)(0x5f133f0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745979573, loc:(*time.Location)(0x5f133f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745979573, loc:(*time.Location)(0x5f133f0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 11 16:32:58.473: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:32:58.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8992" for this suite. STEP: Destroying namespace "webhook-8992-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:19.896 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":309,"completed":55,"skipped":1092,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:32:58.785: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should provide container's cpu limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward API volume plugin Jan 11 16:32:58.876: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c52ce95c-db9b-4cb1-8cbf-ce0072dc0803" in namespace "downward-api-9153" to be "Succeeded or Failed" Jan 11 16:32:58.914: INFO: Pod "downwardapi-volume-c52ce95c-db9b-4cb1-8cbf-ce0072dc0803": Phase="Pending", Reason="", readiness=false. Elapsed: 37.802224ms Jan 11 16:33:01.011: INFO: Pod "downwardapi-volume-c52ce95c-db9b-4cb1-8cbf-ce0072dc0803": Phase="Pending", Reason="", readiness=false. Elapsed: 2.134113556s Jan 11 16:33:03.017: INFO: Pod "downwardapi-volume-c52ce95c-db9b-4cb1-8cbf-ce0072dc0803": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.140838679s STEP: Saw pod success Jan 11 16:33:03.018: INFO: Pod "downwardapi-volume-c52ce95c-db9b-4cb1-8cbf-ce0072dc0803" satisfied condition "Succeeded or Failed" Jan 11 16:33:03.023: INFO: Trying to get logs from node leguer-worker pod downwardapi-volume-c52ce95c-db9b-4cb1-8cbf-ce0072dc0803 container client-container: STEP: delete the pod Jan 11 16:33:03.150: INFO: Waiting for pod downwardapi-volume-c52ce95c-db9b-4cb1-8cbf-ce0072dc0803 to disappear Jan 11 16:33:03.158: INFO: Pod downwardapi-volume-c52ce95c-db9b-4cb1-8cbf-ce0072dc0803 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:33:03.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9153" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":309,"completed":56,"skipped":1103,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:33:03.178: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating secret with name projected-secret-test-d677fcda-26b9-44b3-85f2-e346a6ef16e4 STEP: Creating a pod to test consume secrets Jan 11 16:33:03.326: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-86735e0d-b869-40d6-9503-c0a65a79e038" in namespace "projected-3077" to be "Succeeded or Failed" Jan 11 16:33:03.332: INFO: Pod "pod-projected-secrets-86735e0d-b869-40d6-9503-c0a65a79e038": Phase="Pending", Reason="", readiness=false. Elapsed: 6.406499ms Jan 11 16:33:05.341: INFO: Pod "pod-projected-secrets-86735e0d-b869-40d6-9503-c0a65a79e038": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015096687s Jan 11 16:33:07.349: INFO: Pod "pod-projected-secrets-86735e0d-b869-40d6-9503-c0a65a79e038": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023166979s STEP: Saw pod success Jan 11 16:33:07.349: INFO: Pod "pod-projected-secrets-86735e0d-b869-40d6-9503-c0a65a79e038" satisfied condition "Succeeded or Failed" Jan 11 16:33:07.355: INFO: Trying to get logs from node leguer-worker2 pod pod-projected-secrets-86735e0d-b869-40d6-9503-c0a65a79e038 container secret-volume-test: STEP: delete the pod Jan 11 16:33:07.657: INFO: Waiting for pod pod-projected-secrets-86735e0d-b869-40d6-9503-c0a65a79e038 to disappear Jan 11 16:33:07.760: INFO: Pod pod-projected-secrets-86735e0d-b869-40d6-9503-c0a65a79e038 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:33:07.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3077" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":309,"completed":57,"skipped":1125,"failed":0} SSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:33:07.779: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should add annotations for pods in rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating Agnhost RC Jan 11 16:33:07.856: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-2500 create -f -' Jan 11 16:33:13.661: INFO: stderr: "" Jan 11 16:33:13.662: INFO: stdout: "replicationcontroller/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. Jan 11 16:33:14.674: INFO: Selector matched 1 pods for map[app:agnhost] Jan 11 16:33:14.675: INFO: Found 0 / 1 Jan 11 16:33:15.672: INFO: Selector matched 1 pods for map[app:agnhost] Jan 11 16:33:15.672: INFO: Found 0 / 1 Jan 11 16:33:16.671: INFO: Selector matched 1 pods for map[app:agnhost] Jan 11 16:33:16.672: INFO: Found 0 / 1 Jan 11 16:33:17.671: INFO: Selector matched 1 pods for map[app:agnhost] Jan 11 16:33:17.672: INFO: Found 1 / 1 Jan 11 16:33:17.672: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Jan 11 16:33:17.678: INFO: Selector matched 1 pods for map[app:agnhost] Jan 11 16:33:17.678: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Jan 11 16:33:17.679: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-2500 patch pod agnhost-primary-xrdjt -p {"metadata":{"annotations":{"x":"y"}}}' Jan 11 16:33:18.905: INFO: stderr: "" Jan 11 16:33:18.905: INFO: stdout: "pod/agnhost-primary-xrdjt patched\n" STEP: checking annotations Jan 11 16:33:18.911: INFO: Selector matched 1 pods for map[app:agnhost] Jan 11 16:33:18.912: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:33:18.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2500" for this suite. 
• [SLOW TEST:11.166 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl patch /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1466 should add annotations for pods in rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":309,"completed":58,"skipped":1132,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:33:18.950: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47 [It] should return a 406 for a backend which does not implement metadata [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:33:19.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-9553" for this suite. •{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":309,"completed":59,"skipped":1156,"failed":0} ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:33:19.059: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test emptydir volume type on node default medium Jan 11 16:33:19.185: INFO: Waiting up to 5m0s for pod "pod-b5f982d1-19e1-4a65-8928-771be081d4c5" in namespace "emptydir-5074" to be "Succeeded or Failed" Jan 11 16:33:19.221: INFO: Pod "pod-b5f982d1-19e1-4a65-8928-771be081d4c5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 35.619524ms Jan 11 16:33:21.230: INFO: Pod "pod-b5f982d1-19e1-4a65-8928-771be081d4c5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044600932s Jan 11 16:33:23.238: INFO: Pod "pod-b5f982d1-19e1-4a65-8928-771be081d4c5": Phase="Running", Reason="", readiness=true. Elapsed: 4.05321662s Jan 11 16:33:25.257: INFO: Pod "pod-b5f982d1-19e1-4a65-8928-771be081d4c5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.071717545s STEP: Saw pod success Jan 11 16:33:25.257: INFO: Pod "pod-b5f982d1-19e1-4a65-8928-771be081d4c5" satisfied condition "Succeeded or Failed" Jan 11 16:33:25.262: INFO: Trying to get logs from node leguer-worker2 pod pod-b5f982d1-19e1-4a65-8928-771be081d4c5 container test-container: STEP: delete the pod Jan 11 16:33:25.289: INFO: Waiting for pod pod-b5f982d1-19e1-4a65-8928-771be081d4c5 to disappear Jan 11 16:33:25.293: INFO: Pod pod-b5f982d1-19e1-4a65-8928-771be081d4c5 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:33:25.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5074" for this suite. • [SLOW TEST:6.249 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:45 volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":60,"skipped":1156,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:33:25.312: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward api env vars Jan 11 16:33:25.395: INFO: Waiting up to 5m0s for pod "downward-api-be6bd35b-9406-4979-bfea-e6e8ba080857" in namespace "downward-api-6270" to be "Succeeded or Failed" Jan 11 16:33:25.408: INFO: Pod "downward-api-be6bd35b-9406-4979-bfea-e6e8ba080857": Phase="Pending", Reason="", readiness=false. Elapsed: 13.013049ms Jan 11 16:33:27.417: INFO: Pod "downward-api-be6bd35b-9406-4979-bfea-e6e8ba080857": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021927773s Jan 11 16:33:29.423: INFO: Pod "downward-api-be6bd35b-9406-4979-bfea-e6e8ba080857": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.027696197s STEP: Saw pod success Jan 11 16:33:29.423: INFO: Pod "downward-api-be6bd35b-9406-4979-bfea-e6e8ba080857" satisfied condition "Succeeded or Failed" Jan 11 16:33:29.427: INFO: Trying to get logs from node leguer-worker2 pod downward-api-be6bd35b-9406-4979-bfea-e6e8ba080857 container dapi-container: STEP: delete the pod Jan 11 16:33:29.509: INFO: Waiting for pod downward-api-be6bd35b-9406-4979-bfea-e6e8ba080857 to disappear Jan 11 16:33:29.526: INFO: Pod downward-api-be6bd35b-9406-4979-bfea-e6e8ba080857 no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:33:29.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6270" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":309,"completed":61,"skipped":1184,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:33:29.560: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: set up a multi version CRD Jan 11 16:33:29.644: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:35:45.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6987" for this suite. 
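After the CRD version is renamed, the suite checks that the published OpenAPI document serves the new version name and drops the old one. Outside the suite, a similar spot check can be made against the aggregated OpenAPI document; the grep pattern below is only a guess at how the test CRD definitions are named, and jq is assumed to be installed:

kubectl get --raw /openapi/v2 | jq -r '.definitions | keys[]' | grep crd-publish-openapi
kubectl explain foos --recursive          # resource plural is a placeholder for the CRD's served kind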
• [SLOW TEST:135.850 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":309,"completed":62,"skipped":1193,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should delete a collection of pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:35:45.413: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:187 [It] should delete a collection of pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Create set of pods Jan 11 16:35:45.557: INFO: created test-pod-1 Jan 11 16:35:45.571: INFO: created test-pod-2 Jan 11 16:35:45.588: INFO: created test-pod-3 STEP: waiting for all 3 pods to be located STEP: waiting for all pods to be deleted [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:35:45.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8221" for this suite. 
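Deleting a collection of pods, as the spec above does through the API, corresponds to a label-selected delete followed by a list to confirm the pods are gone; the label key/value below is a placeholder for whatever label the created pods share:

kubectl delete pods -l type=Testing -n pods-8221    # removes every pod matching the selector
kubectl get pods -l type=Testing -n pods-8221       # verify nothing matching remains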
•{"msg":"PASSED [k8s.io] Pods should delete a collection of pods [Conformance]","total":309,"completed":63,"skipped":1217,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:35:45.841: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating service in namespace services-7904 Jan 11 16:35:49.973: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-7904 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' Jan 11 16:35:51.413: INFO: stderr: "I0111 16:35:51.286959 260 log.go:181] (0x24e7650) (0x24e76c0) Create stream\nI0111 16:35:51.290285 260 log.go:181] (0x24e7650) (0x24e76c0) Stream added, broadcasting: 1\nI0111 16:35:51.309842 260 log.go:181] (0x24e7650) Reply frame received for 1\nI0111 16:35:51.310369 260 log.go:181] (0x24e7650) (0x285a0e0) Create stream\nI0111 16:35:51.310439 260 log.go:181] (0x24e7650) (0x285a0e0) Stream added, broadcasting: 3\nI0111 16:35:51.311802 260 log.go:181] (0x24e7650) Reply frame received for 3\nI0111 16:35:51.312098 260 log.go:181] (0x24e7650) (0x24e6070) Create stream\nI0111 16:35:51.312181 260 log.go:181] (0x24e7650) (0x24e6070) Stream added, broadcasting: 5\nI0111 16:35:51.313335 260 log.go:181] (0x24e7650) Reply frame received for 5\nI0111 16:35:51.387981 260 log.go:181] (0x24e7650) Data frame received for 5\nI0111 16:35:51.388232 260 log.go:181] (0x24e6070) (5) Data frame handling\nI0111 16:35:51.388708 260 log.go:181] (0x24e6070) (5) Data frame sent\n+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\nI0111 16:35:51.394216 260 log.go:181] (0x24e7650) Data frame received for 3\nI0111 16:35:51.394417 260 log.go:181] (0x285a0e0) (3) Data frame handling\nI0111 16:35:51.394650 260 log.go:181] (0x285a0e0) (3) Data frame sent\nI0111 16:35:51.395236 260 log.go:181] (0x24e7650) Data frame received for 5\nI0111 16:35:51.395428 260 log.go:181] (0x24e6070) (5) Data frame handling\nI0111 16:35:51.395584 260 log.go:181] (0x24e7650) Data frame received for 3\nI0111 16:35:51.395686 260 log.go:181] (0x285a0e0) (3) Data frame handling\nI0111 16:35:51.397396 260 log.go:181] (0x24e7650) Data frame received for 1\nI0111 16:35:51.397491 260 log.go:181] (0x24e76c0) (1) Data frame handling\nI0111 16:35:51.397600 260 log.go:181] (0x24e76c0) (1) Data frame sent\nI0111 16:35:51.398116 260 log.go:181] (0x24e7650) (0x24e76c0) Stream removed, broadcasting: 1\nI0111 16:35:51.400645 260 log.go:181] (0x24e7650) Go away received\nI0111 16:35:51.404059 260 log.go:181] (0x24e7650) (0x24e76c0) Stream removed, broadcasting: 1\nI0111 16:35:51.404512 
260 log.go:181] (0x24e7650) (0x285a0e0) Stream removed, broadcasting: 3\nI0111 16:35:51.404758 260 log.go:181] (0x24e7650) (0x24e6070) Stream removed, broadcasting: 5\n" Jan 11 16:35:51.414: INFO: stdout: "iptables" Jan 11 16:35:51.414: INFO: proxyMode: iptables Jan 11 16:35:51.428: INFO: Waiting for pod kube-proxy-mode-detector to disappear Jan 11 16:35:51.451: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-nodeport-timeout in namespace services-7904 STEP: creating replication controller affinity-nodeport-timeout in namespace services-7904 I0111 16:35:51.554408 10 runners.go:190] Created replication controller with name: affinity-nodeport-timeout, namespace: services-7904, replica count: 3 I0111 16:35:54.605776 10 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0111 16:35:57.606643 10 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0111 16:36:00.607703 10 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 11 16:36:00.632: INFO: Creating new exec pod Jan 11 16:36:05.672: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-7904 exec execpod-affinityhc896 -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-timeout 80' Jan 11 16:36:07.190: INFO: stderr: "I0111 16:36:07.044106 280 log.go:181] (0x30a0000) (0x30a0070) Create stream\nI0111 16:36:07.046979 280 log.go:181] (0x30a0000) (0x30a0070) Stream added, broadcasting: 1\nI0111 16:36:07.067077 280 log.go:181] (0x30a0000) Reply frame received for 1\nI0111 16:36:07.067675 280 log.go:181] (0x30a0000) (0x2792540) Create stream\nI0111 16:36:07.067770 280 log.go:181] (0x30a0000) (0x2792540) Stream added, broadcasting: 3\nI0111 16:36:07.069489 280 log.go:181] (0x30a0000) Reply frame received for 3\nI0111 16:36:07.069793 280 log.go:181] (0x30a0000) (0x30a0150) Create stream\nI0111 16:36:07.069880 280 log.go:181] (0x30a0000) (0x30a0150) Stream added, broadcasting: 5\nI0111 16:36:07.071335 280 log.go:181] (0x30a0000) Reply frame received for 5\nI0111 16:36:07.151555 280 log.go:181] (0x30a0000) Data frame received for 5\nI0111 16:36:07.151936 280 log.go:181] (0x30a0150) (5) Data frame handling\nI0111 16:36:07.152619 280 log.go:181] (0x30a0150) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-nodeport-timeout 80\nI0111 16:36:07.172052 280 log.go:181] (0x30a0000) Data frame received for 5\nI0111 16:36:07.172187 280 log.go:181] (0x30a0150) (5) Data frame handling\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\nI0111 16:36:07.172291 280 log.go:181] (0x30a0000) Data frame received for 3\nI0111 16:36:07.172506 280 log.go:181] (0x2792540) (3) Data frame handling\nI0111 16:36:07.172784 280 log.go:181] (0x30a0150) (5) Data frame sent\nI0111 16:36:07.173099 280 log.go:181] (0x30a0000) Data frame received for 5\nI0111 16:36:07.173241 280 log.go:181] (0x30a0150) (5) Data frame handling\nI0111 16:36:07.174247 280 log.go:181] (0x30a0000) Data frame received for 1\nI0111 16:36:07.174365 280 log.go:181] (0x30a0070) (1) Data frame handling\nI0111 16:36:07.174486 280 log.go:181] (0x30a0070) (1) Data frame sent\nI0111 16:36:07.175110 280 log.go:181] (0x30a0000) (0x30a0070) Stream removed, broadcasting: 1\nI0111 
16:36:07.178172 280 log.go:181] (0x30a0000) Go away received\nI0111 16:36:07.180004 280 log.go:181] (0x30a0000) (0x30a0070) Stream removed, broadcasting: 1\nI0111 16:36:07.180591 280 log.go:181] (0x30a0000) (0x2792540) Stream removed, broadcasting: 3\nI0111 16:36:07.180946 280 log.go:181] (0x30a0000) (0x30a0150) Stream removed, broadcasting: 5\n" Jan 11 16:36:07.191: INFO: stdout: "" Jan 11 16:36:07.197: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-7904 exec execpod-affinityhc896 -- /bin/sh -x -c nc -zv -t -w 2 10.96.192.55 80' Jan 11 16:36:08.646: INFO: stderr: "I0111 16:36:08.559455 300 log.go:181] (0x2b90000) (0x2b90070) Create stream\nI0111 16:36:08.561209 300 log.go:181] (0x2b90000) (0x2b90070) Stream added, broadcasting: 1\nI0111 16:36:08.568673 300 log.go:181] (0x2b90000) Reply frame received for 1\nI0111 16:36:08.569126 300 log.go:181] (0x2b90000) (0x2b902a0) Create stream\nI0111 16:36:08.569180 300 log.go:181] (0x2b90000) (0x2b902a0) Stream added, broadcasting: 3\nI0111 16:36:08.570414 300 log.go:181] (0x2b90000) Reply frame received for 3\nI0111 16:36:08.570879 300 log.go:181] (0x2b90000) (0x27521c0) Create stream\nI0111 16:36:08.570980 300 log.go:181] (0x2b90000) (0x27521c0) Stream added, broadcasting: 5\nI0111 16:36:08.572333 300 log.go:181] (0x2b90000) Reply frame received for 5\nI0111 16:36:08.628538 300 log.go:181] (0x2b90000) Data frame received for 3\nI0111 16:36:08.628908 300 log.go:181] (0x2b902a0) (3) Data frame handling\nI0111 16:36:08.629245 300 log.go:181] (0x2b90000) Data frame received for 5\nI0111 16:36:08.629500 300 log.go:181] (0x27521c0) (5) Data frame handling\nI0111 16:36:08.629861 300 log.go:181] (0x2b90000) Data frame received for 1\nI0111 16:36:08.630021 300 log.go:181] (0x2b90070) (1) Data frame handling\nI0111 16:36:08.631095 300 log.go:181] (0x27521c0) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.192.55 80\nConnection to 10.96.192.55 80 port [tcp/http] succeeded!\nI0111 16:36:08.631397 300 log.go:181] (0x2b90070) (1) Data frame sent\nI0111 16:36:08.631803 300 log.go:181] (0x2b90000) Data frame received for 5\nI0111 16:36:08.631943 300 log.go:181] (0x27521c0) (5) Data frame handling\nI0111 16:36:08.632334 300 log.go:181] (0x2b90000) (0x2b90070) Stream removed, broadcasting: 1\nI0111 16:36:08.634390 300 log.go:181] (0x2b90000) Go away received\nI0111 16:36:08.637245 300 log.go:181] (0x2b90000) (0x2b90070) Stream removed, broadcasting: 1\nI0111 16:36:08.637464 300 log.go:181] (0x2b90000) (0x2b902a0) Stream removed, broadcasting: 3\nI0111 16:36:08.637642 300 log.go:181] (0x2b90000) (0x27521c0) Stream removed, broadcasting: 5\n" Jan 11 16:36:08.647: INFO: stdout: "" Jan 11 16:36:08.649: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-7904 exec execpod-affinityhc896 -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.13 30806' Jan 11 16:36:10.143: INFO: stderr: "I0111 16:36:10.027129 320 log.go:181] (0x28f2230) (0x28f22a0) Create stream\nI0111 16:36:10.040505 320 log.go:181] (0x28f2230) (0x28f22a0) Stream added, broadcasting: 1\nI0111 16:36:10.051762 320 log.go:181] (0x28f2230) Reply frame received for 1\nI0111 16:36:10.052326 320 log.go:181] (0x28f2230) (0x29e8150) Create stream\nI0111 16:36:10.052399 320 log.go:181] (0x28f2230) (0x29e8150) Stream added, broadcasting: 3\nI0111 16:36:10.054024 320 log.go:181] (0x28f2230) Reply frame received for 3\nI0111 16:36:10.054464 320 log.go:181] (0x28f2230) (0x247cc40) 
Create stream\nI0111 16:36:10.054560 320 log.go:181] (0x28f2230) (0x247cc40) Stream added, broadcasting: 5\nI0111 16:36:10.055780 320 log.go:181] (0x28f2230) Reply frame received for 5\nI0111 16:36:10.128632 320 log.go:181] (0x28f2230) Data frame received for 3\nI0111 16:36:10.128969 320 log.go:181] (0x28f2230) Data frame received for 5\nI0111 16:36:10.129076 320 log.go:181] (0x29e8150) (3) Data frame handling\nI0111 16:36:10.129300 320 log.go:181] (0x247cc40) (5) Data frame handling\nI0111 16:36:10.129538 320 log.go:181] (0x28f2230) Data frame received for 1\nI0111 16:36:10.129641 320 log.go:181] (0x28f22a0) (1) Data frame handling\nI0111 16:36:10.130486 320 log.go:181] (0x28f22a0) (1) Data frame sent\nI0111 16:36:10.130563 320 log.go:181] (0x247cc40) (5) Data frame sent\nI0111 16:36:10.130660 320 log.go:181] (0x28f2230) Data frame received for 5\nI0111 16:36:10.130716 320 log.go:181] (0x247cc40) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.13 30806\nConnection to 172.18.0.13 30806 port [tcp/30806] succeeded!\nI0111 16:36:10.132051 320 log.go:181] (0x28f2230) (0x28f22a0) Stream removed, broadcasting: 1\nI0111 16:36:10.132719 320 log.go:181] (0x28f2230) Go away received\nI0111 16:36:10.135431 320 log.go:181] (0x28f2230) (0x28f22a0) Stream removed, broadcasting: 1\nI0111 16:36:10.135644 320 log.go:181] (0x28f2230) (0x29e8150) Stream removed, broadcasting: 3\nI0111 16:36:10.135815 320 log.go:181] (0x28f2230) (0x247cc40) Stream removed, broadcasting: 5\n" Jan 11 16:36:10.144: INFO: stdout: "" Jan 11 16:36:10.144: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-7904 exec execpod-affinityhc896 -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.12 30806' Jan 11 16:36:12.291: INFO: stderr: "I0111 16:36:12.190695 341 log.go:181] (0x2671030) (0x26710a0) Create stream\nI0111 16:36:12.194233 341 log.go:181] (0x2671030) (0x26710a0) Stream added, broadcasting: 1\nI0111 16:36:12.213135 341 log.go:181] (0x2671030) Reply frame received for 1\nI0111 16:36:12.213622 341 log.go:181] (0x2671030) (0x29940e0) Create stream\nI0111 16:36:12.213690 341 log.go:181] (0x2671030) (0x29940e0) Stream added, broadcasting: 3\nI0111 16:36:12.214939 341 log.go:181] (0x2671030) Reply frame received for 3\nI0111 16:36:12.215211 341 log.go:181] (0x2671030) (0x2670070) Create stream\nI0111 16:36:12.215282 341 log.go:181] (0x2671030) (0x2670070) Stream added, broadcasting: 5\nI0111 16:36:12.216170 341 log.go:181] (0x2671030) Reply frame received for 5\nI0111 16:36:12.274343 341 log.go:181] (0x2671030) Data frame received for 5\nI0111 16:36:12.274689 341 log.go:181] (0x2670070) (5) Data frame handling\nI0111 16:36:12.275104 341 log.go:181] (0x2671030) Data frame received for 3\nI0111 16:36:12.275268 341 log.go:181] (0x2670070) (5) Data frame sent\nI0111 16:36:12.275524 341 log.go:181] (0x2671030) Data frame received for 1\nI0111 16:36:12.275745 341 log.go:181] (0x26710a0) (1) Data frame handling\nI0111 16:36:12.275946 341 log.go:181] (0x29940e0) (3) Data frame handling\n+ nc -zv -t -w 2 172.18.0.12 30806\nConnection to 172.18.0.12 30806 port [tcp/30806] succeeded!\nI0111 16:36:12.276329 341 log.go:181] (0x2671030) Data frame received for 5\nI0111 16:36:12.276585 341 log.go:181] (0x2670070) (5) Data frame handling\nI0111 16:36:12.277026 341 log.go:181] (0x26710a0) (1) Data frame sent\nI0111 16:36:12.278137 341 log.go:181] (0x2671030) (0x26710a0) Stream removed, broadcasting: 1\nI0111 16:36:12.280224 341 log.go:181] (0x2671030) Go away received\nI0111 
16:36:12.283432 341 log.go:181] (0x2671030) (0x26710a0) Stream removed, broadcasting: 1\nI0111 16:36:12.283628 341 log.go:181] (0x2671030) (0x29940e0) Stream removed, broadcasting: 3\nI0111 16:36:12.283767 341 log.go:181] (0x2671030) (0x2670070) Stream removed, broadcasting: 5\n" Jan 11 16:36:12.293: INFO: stdout: "" Jan 11 16:36:12.293: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-7904 exec execpod-affinityhc896 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.13:30806/ ; done' Jan 11 16:36:13.835: INFO: stderr: "I0111 16:36:13.586846 361 log.go:181] (0x30bbd50) (0x30bbdc0) Create stream\nI0111 16:36:13.590667 361 log.go:181] (0x30bbd50) (0x30bbdc0) Stream added, broadcasting: 1\nI0111 16:36:13.602223 361 log.go:181] (0x30bbd50) Reply frame received for 1\nI0111 16:36:13.603443 361 log.go:181] (0x30bbd50) (0x2f0c070) Create stream\nI0111 16:36:13.603589 361 log.go:181] (0x30bbd50) (0x2f0c070) Stream added, broadcasting: 3\nI0111 16:36:13.605588 361 log.go:181] (0x30bbd50) Reply frame received for 3\nI0111 16:36:13.605836 361 log.go:181] (0x30bbd50) (0x2f0c2a0) Create stream\nI0111 16:36:13.605907 361 log.go:181] (0x30bbd50) (0x2f0c2a0) Stream added, broadcasting: 5\nI0111 16:36:13.607308 361 log.go:181] (0x30bbd50) Reply frame received for 5\nI0111 16:36:13.723050 361 log.go:181] (0x30bbd50) Data frame received for 5\nI0111 16:36:13.724373 361 log.go:181] (0x2f0c2a0) (5) Data frame handling\nI0111 16:36:13.725642 361 log.go:181] (0x30bbd50) Data frame received for 3\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:30806/\nI0111 16:36:13.725881 361 log.go:181] (0x2f0c070) (3) Data frame handling\nI0111 16:36:13.729959 361 log.go:181] (0x2f0c070) (3) Data frame sent\nI0111 16:36:13.731987 361 log.go:181] (0x2f0c2a0) (5) Data frame sent\nI0111 16:36:13.735177 361 log.go:181] (0x30bbd50) Data frame received for 3\nI0111 16:36:13.735331 361 log.go:181] (0x2f0c070) (3) Data frame handling\nI0111 16:36:13.735475 361 log.go:181] (0x30bbd50) Data frame received for 5\nI0111 16:36:13.735635 361 log.go:181] (0x2f0c2a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:30806/\nI0111 16:36:13.735770 361 log.go:181] (0x2f0c070) (3) Data frame sent\nI0111 16:36:13.735937 361 log.go:181] (0x30bbd50) Data frame received for 3\nI0111 16:36:13.736017 361 log.go:181] (0x2f0c070) (3) Data frame handling\nI0111 16:36:13.736082 361 log.go:181] (0x2f0c070) (3) Data frame sent\nI0111 16:36:13.736140 361 log.go:181] (0x30bbd50) Data frame received for 3\nI0111 16:36:13.736192 361 log.go:181] (0x2f0c070) (3) Data frame handling\nI0111 16:36:13.736540 361 log.go:181] (0x2f0c070) (3) Data frame sent\nI0111 16:36:13.736599 361 log.go:181] (0x30bbd50) Data frame received for 3\nI0111 16:36:13.736649 361 log.go:181] (0x2f0c070) (3) Data frame handling\nI0111 16:36:13.736704 361 log.go:181] (0x2f0c070) (3) Data frame sent\nI0111 16:36:13.736767 361 log.go:181] (0x2f0c2a0) (5) Data frame sent\nI0111 16:36:13.736819 361 log.go:181] (0x30bbd50) Data frame received for 5\nI0111 16:36:13.736941 361 log.go:181] (0x2f0c2a0) (5) Data frame handling\nI0111 16:36:13.737007 361 log.go:181] (0x2f0c2a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:30806/\nI0111 16:36:13.739320 361 log.go:181] (0x30bbd50) Data frame received for 3\nI0111 16:36:13.739395 361 log.go:181] (0x2f0c070) (3) Data frame handling\nI0111 
16:36:13.739483 361 log.go:181] (0x2f0c070) (3) Data frame sent\nI0111 16:36:13.740020 361 log.go:181] (0x30bbd50) Data frame received for 3\nI0111 16:36:13.740133 361 log.go:181] (0x2f0c070) (3) Data frame handling\nI0111 16:36:13.740206 361 log.go:181] (0x2f0c070) (3) Data frame sent\nI0111 16:36:13.740318 361 log.go:181] (0x30bbd50) Data frame received for 5\nI0111 16:36:13.740407 361 log.go:181] (0x2f0c2a0) (5) Data frame handling\nI0111 16:36:13.740521 361 log.go:181] (0x2f0c2a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:30806/\nI0111 16:36:13.743742 361 log.go:181] (0x30bbd50) Data frame received for 3\nI0111 16:36:13.743810 361 log.go:181] (0x2f0c070) (3) Data frame handling\nI0111 16:36:13.743877 361 log.go:181] (0x2f0c070) (3) Data frame sent\nI0111 16:36:13.744229 361 log.go:181] (0x30bbd50) Data frame received for 5\nI0111 16:36:13.744304 361 log.go:181] (0x2f0c2a0) (5) Data frame handling\nI0111 16:36:13.744385 361 log.go:181] (0x2f0c2a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:30806/\nI0111 16:36:13.745419 361 log.go:181] (0x30bbd50) Data frame received for 3\nI0111 16:36:13.745503 361 log.go:181] (0x2f0c070) (3) Data frame handling\nI0111 16:36:13.745597 361 log.go:181] (0x2f0c070) (3) Data frame sent\nI0111 16:36:13.748649 361 log.go:181] (0x30bbd50) Data frame received for 3\nI0111 16:36:13.748740 361 log.go:181] (0x2f0c070) (3) Data frame handling\nI0111 16:36:13.748905 361 log.go:181] (0x2f0c070) (3) Data frame sent\nI0111 16:36:13.749403 361 log.go:181] (0x30bbd50) Data frame received for 3\nI0111 16:36:13.749539 361 log.go:181] (0x2f0c070) (3) Data frame handling\nI0111 16:36:13.749651 361 log.go:181] (0x2f0c070) (3) Data frame sent\nI0111 16:36:13.749740 361 log.go:181] (0x30bbd50) Data frame received for 5\nI0111 16:36:13.749822 361 log.go:181] (0x2f0c2a0) (5) Data frame handling\nI0111 16:36:13.749920 361 log.go:181] (0x2f0c2a0) (5) Data frame sent\n+ echo\nI0111 16:36:13.750009 361 log.go:181] (0x30bbd50) Data frame received for 5\nI0111 16:36:13.750107 361 log.go:181] (0x2f0c2a0) (5) Data frame handling\nI0111 16:36:13.750243 361 log.go:181] (0x2f0c2a0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:30806/\nI0111 16:36:13.753094 361 log.go:181] (0x30bbd50) Data frame received for 3\nI0111 16:36:13.753197 361 log.go:181] (0x2f0c070) (3) Data frame handling\nI0111 16:36:13.753327 361 log.go:181] (0x2f0c070) (3) Data frame sent\nI0111 16:36:13.754091 361 log.go:181] (0x30bbd50) Data frame received for 3\nI0111 16:36:13.754228 361 log.go:181] (0x2f0c070) (3) Data frame handling\nI0111 16:36:13.754382 361 log.go:181] (0x30bbd50) Data frame received for 5\nI0111 16:36:13.754534 361 log.go:181] (0x2f0c2a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:30806/\nI0111 16:36:13.754648 361 log.go:181] (0x2f0c070) (3) Data frame sent\nI0111 16:36:13.754754 361 log.go:181] (0x2f0c2a0) (5) Data frame sent\nI0111 16:36:13.758165 361 log.go:181] (0x30bbd50) Data frame received for 3\nI0111 16:36:13.758284 361 log.go:181] (0x2f0c070) (3) Data frame handling\nI0111 16:36:13.758409 361 log.go:181] (0x2f0c070) (3) Data frame sent\nI0111 16:36:13.758502 361 log.go:181] (0x30bbd50) Data frame received for 5\nI0111 16:36:13.758629 361 log.go:181] (0x2f0c2a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:30806/\nI0111 16:36:13.758741 361 log.go:181] (0x30bbd50) Data frame received for 3\nI0111 16:36:13.758861 
361 log.go:181] (0x2f0c070) (3) Data frame handling\nI0111 16:36:13.758970 361 log.go:181] (0x2f0c2a0) (5) Data frame sent\nI0111 16:36:13.759059 361 log.go:181] (0x2f0c070) (3) Data frame sent\nI0111 16:36:13.762875 361 log.go:181] (0x30bbd50) Data frame received for 3\nI0111 16:36:13.762982 361 log.go:181] (0x2f0c070) (3) Data frame handling\nI0111 16:36:13.763072 361 log.go:181] (0x2f0c070) (3) Data frame sent\nI0111 16:36:13.763650 361 log.go:181] (0x30bbd50) Data frame received for 3\nI0111 16:36:13.763748 361 log.go:181] (0x2f0c070) (3) Data frame handling\nI0111 16:36:13.763830 361 log.go:181] (0x2f0c070) (3) Data frame sent\nI0111 16:36:13.763907 361 log.go:181] (0x30bbd50) Data frame received for 5\nI0111 16:36:13.763992 361 log.go:181] (0x2f0c2a0) (5) Data frame handling\nI0111 16:36:13.764089 361 log.go:181] (0x2f0c2a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:30806/\nI0111 16:36:13.769265 361 log.go:181] (0x30bbd50) Data frame received for 3\nI0111 16:36:13.769353 361 log.go:181] (0x2f0c070) (3) Data frame handling\nI0111 16:36:13.769452 361 log.go:181] (0x2f0c070) (3) Data frame sent\nI0111 16:36:13.770185 361 log.go:181] (0x30bbd50) Data frame received for 3\nI0111 16:36:13.770305 361 log.go:181] (0x2f0c070) (3) Data frame handling\nI0111 16:36:13.770385 361 log.go:181] (0x30bbd50) Data frame received for 5\nI0111 16:36:13.770505 361 log.go:181] (0x2f0c2a0) (5) Data frame handling\nI0111 16:36:13.770603 361 log.go:181] (0x2f0c2a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:30806/\nI0111 16:36:13.770700 361 log.go:181] (0x2f0c070) (3) Data frame sent\nI0111 16:36:13.775033 361 log.go:181] (0x30bbd50) Data frame received for 3\nI0111 16:36:13.775188 361 log.go:181] (0x2f0c070) (3) Data frame handling\nI0111 16:36:13.775375 361 log.go:181] (0x2f0c070) (3) Data frame sent\nI0111 16:36:13.775919 361 log.go:181] (0x30bbd50) Data frame received for 3\nI0111 16:36:13.776045 361 log.go:181] (0x2f0c070) (3) Data frame handling\nI0111 16:36:13.776146 361 log.go:181] (0x2f0c070) (3) Data frame sent\nI0111 16:36:13.776240 361 log.go:181] (0x30bbd50) Data frame received for 5\nI0111 16:36:13.776329 361 log.go:181] (0x2f0c2a0) (5) Data frame handling\nI0111 16:36:13.776441 361 log.go:181] (0x2f0c2a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:30806/\nI0111 16:36:13.780230 361 log.go:181] (0x30bbd50) Data frame received for 3\nI0111 16:36:13.780378 361 log.go:181] (0x2f0c070) (3) Data frame handling\nI0111 16:36:13.780600 361 log.go:181] (0x2f0c070) (3) Data frame sent\nI0111 16:36:13.781273 361 log.go:181] (0x30bbd50) Data frame received for 3\nI0111 16:36:13.781375 361 log.go:181] (0x2f0c070) (3) Data frame handling\nI0111 16:36:13.781463 361 log.go:181] (0x2f0c070) (3) Data frame sent\nI0111 16:36:13.781546 361 log.go:181] (0x30bbd50) Data frame received for 5\nI0111 16:36:13.781618 361 log.go:181] (0x2f0c2a0) (5) Data frame handling\nI0111 16:36:13.781708 361 log.go:181] (0x2f0c2a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:30806/\nI0111 16:36:13.787174 361 log.go:181] (0x30bbd50) Data frame received for 3\nI0111 16:36:13.787316 361 log.go:181] (0x2f0c070) (3) Data frame handling\nI0111 16:36:13.787469 361 log.go:181] (0x2f0c070) (3) Data frame sent\nI0111 16:36:13.787935 361 log.go:181] (0x30bbd50) Data frame received for 3\nI0111 16:36:13.788059 361 log.go:181] (0x2f0c070) (3) Data frame handling\nI0111 16:36:13.788198 361 
log.go:181] (0x30bbd50) Data frame received for 5\nI0111 16:36:13.788407 361 log.go:181] (0x2f0c2a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:30806/\nI0111 16:36:13.788558 361 log.go:181] (0x2f0c070) (3) Data frame sent\nI0111 16:36:13.788786 361 log.go:181] (0x2f0c2a0) (5) Data frame sent\nI0111 16:36:13.794439 361 log.go:181] (0x30bbd50) Data frame received for 3\nI0111 16:36:13.794576 361 log.go:181] (0x2f0c070) (3) Data frame handling\nI0111 16:36:13.794729 361 log.go:181] (0x2f0c070) (3) Data frame sent\nI0111 16:36:13.797026 361 log.go:181] (0x30bbd50) Data frame received for 5\nI0111 16:36:13.797155 361 log.go:181] (0x2f0c2a0) (5) Data frame handling\nI0111 16:36:13.797265 361 log.go:181] (0x2f0c2a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:30806/\nI0111 16:36:13.797367 361 log.go:181] (0x30bbd50) Data frame received for 3\nI0111 16:36:13.797452 361 log.go:181] (0x2f0c070) (3) Data frame handling\nI0111 16:36:13.797559 361 log.go:181] (0x2f0c070) (3) Data frame sent\nI0111 16:36:13.802389 361 log.go:181] (0x30bbd50) Data frame received for 3\nI0111 16:36:13.802514 361 log.go:181] (0x2f0c070) (3) Data frame handling\nI0111 16:36:13.802653 361 log.go:181] (0x2f0c070) (3) Data frame sent\nI0111 16:36:13.802933 361 log.go:181] (0x30bbd50) Data frame received for 5\nI0111 16:36:13.803091 361 log.go:181] (0x2f0c2a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:30806/\nI0111 16:36:13.803196 361 log.go:181] (0x30bbd50) Data frame received for 3\nI0111 16:36:13.803646 361 log.go:181] (0x2f0c070) (3) Data frame handling\nI0111 16:36:13.803767 361 log.go:181] (0x2f0c2a0) (5) Data frame sent\nI0111 16:36:13.803905 361 log.go:181] (0x2f0c070) (3) Data frame sent\nI0111 16:36:13.809534 361 log.go:181] (0x30bbd50) Data frame received for 3\nI0111 16:36:13.809684 361 log.go:181] (0x2f0c070) (3) Data frame handling\nI0111 16:36:13.809831 361 log.go:181] (0x2f0c070) (3) Data frame sent\nI0111 16:36:13.810417 361 log.go:181] (0x30bbd50) Data frame received for 3\nI0111 16:36:13.810584 361 log.go:181] (0x2f0c070) (3) Data frame handling\nI0111 16:36:13.810760 361 log.go:181] (0x30bbd50) Data frame received for 5\nI0111 16:36:13.811013 361 log.go:181] (0x2f0c2a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:30806/\nI0111 16:36:13.811212 361 log.go:181] (0x2f0c070) (3) Data frame sent\nI0111 16:36:13.811344 361 log.go:181] (0x2f0c2a0) (5) Data frame sent\nI0111 16:36:13.817406 361 log.go:181] (0x30bbd50) Data frame received for 3\nI0111 16:36:13.817581 361 log.go:181] (0x2f0c070) (3) Data frame handling\nI0111 16:36:13.817794 361 log.go:181] (0x2f0c070) (3) Data frame sent\nI0111 16:36:13.818139 361 log.go:181] (0x30bbd50) Data frame received for 3\nI0111 16:36:13.818249 361 log.go:181] (0x2f0c070) (3) Data frame handling\nI0111 16:36:13.818400 361 log.go:181] (0x30bbd50) Data frame received for 5\nI0111 16:36:13.818552 361 log.go:181] (0x2f0c2a0) (5) Data frame handling\nI0111 16:36:13.820643 361 log.go:181] (0x30bbd50) Data frame received for 1\nI0111 16:36:13.820797 361 log.go:181] (0x30bbdc0) (1) Data frame handling\nI0111 16:36:13.821091 361 log.go:181] (0x30bbdc0) (1) Data frame sent\nI0111 16:36:13.821980 361 log.go:181] (0x30bbd50) (0x30bbdc0) Stream removed, broadcasting: 1\nI0111 16:36:13.825031 361 log.go:181] (0x30bbd50) Go away received\nI0111 16:36:13.827415 361 log.go:181] (0x30bbd50) (0x30bbdc0) Stream removed, broadcasting: 
1\nI0111 16:36:13.827595 361 log.go:181] (0x30bbd50) (0x2f0c070) Stream removed, broadcasting: 3\nI0111 16:36:13.827758 361 log.go:181] (0x30bbd50) (0x2f0c2a0) Stream removed, broadcasting: 5\n" Jan 11 16:36:13.843: INFO: stdout: "\naffinity-nodeport-timeout-zcdbk\naffinity-nodeport-timeout-zcdbk\naffinity-nodeport-timeout-zcdbk\naffinity-nodeport-timeout-zcdbk\naffinity-nodeport-timeout-zcdbk\naffinity-nodeport-timeout-zcdbk\naffinity-nodeport-timeout-zcdbk\naffinity-nodeport-timeout-zcdbk\naffinity-nodeport-timeout-zcdbk\naffinity-nodeport-timeout-zcdbk\naffinity-nodeport-timeout-zcdbk\naffinity-nodeport-timeout-zcdbk\naffinity-nodeport-timeout-zcdbk\naffinity-nodeport-timeout-zcdbk\naffinity-nodeport-timeout-zcdbk\naffinity-nodeport-timeout-zcdbk" Jan 11 16:36:13.844: INFO: Received response from host: affinity-nodeport-timeout-zcdbk Jan 11 16:36:13.844: INFO: Received response from host: affinity-nodeport-timeout-zcdbk Jan 11 16:36:13.844: INFO: Received response from host: affinity-nodeport-timeout-zcdbk Jan 11 16:36:13.844: INFO: Received response from host: affinity-nodeport-timeout-zcdbk Jan 11 16:36:13.844: INFO: Received response from host: affinity-nodeport-timeout-zcdbk Jan 11 16:36:13.844: INFO: Received response from host: affinity-nodeport-timeout-zcdbk Jan 11 16:36:13.844: INFO: Received response from host: affinity-nodeport-timeout-zcdbk Jan 11 16:36:13.844: INFO: Received response from host: affinity-nodeport-timeout-zcdbk Jan 11 16:36:13.844: INFO: Received response from host: affinity-nodeport-timeout-zcdbk Jan 11 16:36:13.845: INFO: Received response from host: affinity-nodeport-timeout-zcdbk Jan 11 16:36:13.845: INFO: Received response from host: affinity-nodeport-timeout-zcdbk Jan 11 16:36:13.845: INFO: Received response from host: affinity-nodeport-timeout-zcdbk Jan 11 16:36:13.845: INFO: Received response from host: affinity-nodeport-timeout-zcdbk Jan 11 16:36:13.845: INFO: Received response from host: affinity-nodeport-timeout-zcdbk Jan 11 16:36:13.845: INFO: Received response from host: affinity-nodeport-timeout-zcdbk Jan 11 16:36:13.845: INFO: Received response from host: affinity-nodeport-timeout-zcdbk Jan 11 16:36:13.845: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-7904 exec execpod-affinityhc896 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.18.0.13:30806/' Jan 11 16:36:15.377: INFO: stderr: "I0111 16:36:15.226969 381 log.go:181] (0x292ae00) (0x292ae70) Create stream\nI0111 16:36:15.229774 381 log.go:181] (0x292ae00) (0x292ae70) Stream added, broadcasting: 1\nI0111 16:36:15.242365 381 log.go:181] (0x292ae00) Reply frame received for 1\nI0111 16:36:15.243409 381 log.go:181] (0x292ae00) (0x251ed90) Create stream\nI0111 16:36:15.243553 381 log.go:181] (0x292ae00) (0x251ed90) Stream added, broadcasting: 3\nI0111 16:36:15.245586 381 log.go:181] (0x292ae00) Reply frame received for 3\nI0111 16:36:15.245842 381 log.go:181] (0x292ae00) (0x27bfc00) Create stream\nI0111 16:36:15.245926 381 log.go:181] (0x292ae00) (0x27bfc00) Stream added, broadcasting: 5\nI0111 16:36:15.247265 381 log.go:181] (0x292ae00) Reply frame received for 5\nI0111 16:36:15.355673 381 log.go:181] (0x292ae00) Data frame received for 5\nI0111 16:36:15.355945 381 log.go:181] (0x27bfc00) (5) Data frame handling\nI0111 16:36:15.356298 381 log.go:181] (0x27bfc00) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:30806/\nI0111 16:36:15.359339 381 log.go:181] (0x292ae00) Data frame 
received for 3\nI0111 16:36:15.359523 381 log.go:181] (0x251ed90) (3) Data frame handling\nI0111 16:36:15.359676 381 log.go:181] (0x251ed90) (3) Data frame sent\nI0111 16:36:15.360319 381 log.go:181] (0x292ae00) Data frame received for 3\nI0111 16:36:15.360483 381 log.go:181] (0x251ed90) (3) Data frame handling\nI0111 16:36:15.360590 381 log.go:181] (0x292ae00) Data frame received for 5\nI0111 16:36:15.360717 381 log.go:181] (0x27bfc00) (5) Data frame handling\nI0111 16:36:15.362712 381 log.go:181] (0x292ae00) Data frame received for 1\nI0111 16:36:15.362820 381 log.go:181] (0x292ae70) (1) Data frame handling\nI0111 16:36:15.362917 381 log.go:181] (0x292ae70) (1) Data frame sent\nI0111 16:36:15.363426 381 log.go:181] (0x292ae00) (0x292ae70) Stream removed, broadcasting: 1\nI0111 16:36:15.365742 381 log.go:181] (0x292ae00) Go away received\nI0111 16:36:15.369606 381 log.go:181] (0x292ae00) (0x292ae70) Stream removed, broadcasting: 1\nI0111 16:36:15.369806 381 log.go:181] (0x292ae00) (0x251ed90) Stream removed, broadcasting: 3\nI0111 16:36:15.369978 381 log.go:181] (0x292ae00) (0x27bfc00) Stream removed, broadcasting: 5\n" Jan 11 16:36:15.378: INFO: stdout: "affinity-nodeport-timeout-zcdbk" Jan 11 16:36:35.379: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-7904 exec execpod-affinityhc896 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.18.0.13:30806/' Jan 11 16:36:36.897: INFO: stderr: "I0111 16:36:36.747170 401 log.go:181] (0x269a1c0) (0x269a460) Create stream\nI0111 16:36:36.748980 401 log.go:181] (0x269a1c0) (0x269a460) Stream added, broadcasting: 1\nI0111 16:36:36.758207 401 log.go:181] (0x269a1c0) Reply frame received for 1\nI0111 16:36:36.758901 401 log.go:181] (0x269a1c0) (0x269a690) Create stream\nI0111 16:36:36.758987 401 log.go:181] (0x269a1c0) (0x269a690) Stream added, broadcasting: 3\nI0111 16:36:36.760665 401 log.go:181] (0x269a1c0) Reply frame received for 3\nI0111 16:36:36.761133 401 log.go:181] (0x269a1c0) (0x247ccb0) Create stream\nI0111 16:36:36.761235 401 log.go:181] (0x269a1c0) (0x247ccb0) Stream added, broadcasting: 5\nI0111 16:36:36.762949 401 log.go:181] (0x269a1c0) Reply frame received for 5\nI0111 16:36:36.855926 401 log.go:181] (0x269a1c0) Data frame received for 5\nI0111 16:36:36.856285 401 log.go:181] (0x247ccb0) (5) Data frame handling\nI0111 16:36:36.857027 401 log.go:181] (0x247ccb0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:30806/\nI0111 16:36:36.858715 401 log.go:181] (0x269a1c0) Data frame received for 3\nI0111 16:36:36.858803 401 log.go:181] (0x269a690) (3) Data frame handling\nI0111 16:36:36.858896 401 log.go:181] (0x269a690) (3) Data frame sent\nI0111 16:36:36.867898 401 log.go:181] (0x269a1c0) Data frame received for 3\nI0111 16:36:36.870844 401 log.go:181] (0x269a690) (3) Data frame handling\nI0111 16:36:36.873633 401 log.go:181] (0x269a1c0) Data frame received for 5\nI0111 16:36:36.873803 401 log.go:181] (0x247ccb0) (5) Data frame handling\nI0111 16:36:36.880959 401 log.go:181] (0x269a1c0) Data frame received for 1\nI0111 16:36:36.882363 401 log.go:181] (0x269a460) (1) Data frame handling\nI0111 16:36:36.882688 401 log.go:181] (0x269a460) (1) Data frame sent\nI0111 16:36:36.884633 401 log.go:181] (0x269a1c0) (0x269a460) Stream removed, broadcasting: 1\nI0111 16:36:36.886749 401 log.go:181] (0x269a1c0) Go away received\nI0111 16:36:36.888368 401 log.go:181] (0x269a1c0) (0x269a460) Stream removed, broadcasting: 1\nI0111 
16:36:36.888613 401 log.go:181] (0x269a1c0) (0x269a690) Stream removed, broadcasting: 3\nI0111 16:36:36.889493 401 log.go:181] (0x269a1c0) (0x247ccb0) Stream removed, broadcasting: 5\n" Jan 11 16:36:36.898: INFO: stdout: "affinity-nodeport-timeout-fdxwx" Jan 11 16:36:36.898: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-timeout in namespace services-7904, will wait for the garbage collector to delete the pods Jan 11 16:36:37.035: INFO: Deleting ReplicationController affinity-nodeport-timeout took: 8.865837ms Jan 11 16:36:37.736: INFO: Terminating ReplicationController affinity-nodeport-timeout pods took: 700.898427ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:37:10.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7904" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 • [SLOW TEST:84.374 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":309,"completed":64,"skipped":1234,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:37:10.217: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward API volume plugin Jan 11 16:37:10.347: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c58e637e-0321-4ae6-9572-3ddb71c12f63" in namespace "downward-api-9449" to be "Succeeded or Failed" Jan 11 16:37:10.373: INFO: Pod "downwardapi-volume-c58e637e-0321-4ae6-9572-3ddb71c12f63": Phase="Pending", Reason="", readiness=false. Elapsed: 25.555748ms Jan 11 16:37:12.382: INFO: Pod "downwardapi-volume-c58e637e-0321-4ae6-9572-3ddb71c12f63": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035259598s Jan 11 16:37:14.391: INFO: Pod "downwardapi-volume-c58e637e-0321-4ae6-9572-3ddb71c12f63": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.044115576s STEP: Saw pod success Jan 11 16:37:14.391: INFO: Pod "downwardapi-volume-c58e637e-0321-4ae6-9572-3ddb71c12f63" satisfied condition "Succeeded or Failed" Jan 11 16:37:14.397: INFO: Trying to get logs from node leguer-worker2 pod downwardapi-volume-c58e637e-0321-4ae6-9572-3ddb71c12f63 container client-container: STEP: delete the pod Jan 11 16:37:14.451: INFO: Waiting for pod downwardapi-volume-c58e637e-0321-4ae6-9572-3ddb71c12f63 to disappear Jan 11 16:37:14.464: INFO: Pod downwardapi-volume-c58e637e-0321-4ae6-9572-3ddb71c12f63 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:37:14.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9449" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":309,"completed":65,"skipped":1244,"failed":0} SSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:37:14.478: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation Jan 11 16:37:14.646: INFO: >>> kubeConfig: /root/.kube/config Jan 11 16:37:37.496: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:38:57.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7479" for this suite. 
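The spec above creates CRDs in different API groups and verifies that both show up in the published OpenAPI documentation. A minimal served-and-published CRD of that kind, with the group, names, and schema below chosen purely for illustration, looks like:

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.stable.example.com           # must be <plural>.<group>
spec:
  group: stable.example.com               # illustrative group
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              bars:
                type: integer
EOF
# A second CRD with a different spec.group would publish under its own definitions prefix.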
• [SLOW TEST:102.704 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":309,"completed":66,"skipped":1247,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should find a service from listing all namespaces [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:38:57.185: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should find a service from listing all namespaces [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: fetching services [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:38:57.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1600" for this suite. 
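The Services case that follows in the log, "should find a service from listing all namespaces", reduces to a single List call issued against every namespace. A small client-go sketch of that check, assuming a clientset built from the usual kubeconfig (the service name is a placeholder):

package example

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// findService lists Services across all namespaces and reports whether one with
// the given name exists, mirroring what the conformance test verifies.
func findService(clientset kubernetes.Interface, name string) (bool, error) {
    svcs, err := clientset.CoreV1().Services(metav1.NamespaceAll).List(context.TODO(), metav1.ListOptions{})
    if err != nil {
        return false, err
    }
    for _, svc := range svcs.Items {
        if svc.Name == name {
            fmt.Printf("found %s in namespace %s\n", svc.Name, svc.Namespace)
            return true, nil
        }
    }
    return false, nil
}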
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 •{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":309,"completed":67,"skipped":1266,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:38:57.342: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 11 16:39:14.079: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 11 16:39:16.178: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745979954, loc:(*time.Location)(0x5f133f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745979954, loc:(*time.Location)(0x5f133f0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745979954, loc:(*time.Location)(0x5f133f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745979954, loc:(*time.Location)(0x5f133f0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 11 16:39:19.221: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:39:19.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4055" for this suite. STEP: Destroying namespace "webhook-4055-markers" for this suite. 
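The AdmissionWebhook case above ("should mutate configmap") registers a mutating webhook for ConfigMap CREATE requests and then checks that a freshly created ConfigMap comes back altered by the webhook backend. A hedged sketch of the registration step using the admissionregistration/v1 Go types; the service name, namespace, path, and CA bundle here are placeholders standing in for whatever the webhook deployment actually serves:

package example

import (
    "context"

    admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// registerConfigMapMutator creates a MutatingWebhookConfiguration that routes
// admission traffic for ConfigMap CREATE requests to an in-cluster webhook service.
func registerConfigMapMutator(clientset kubernetes.Interface, caBundle []byte) error {
    path := "/mutating-configmaps" // placeholder path served by the webhook pod
    sideEffects := admissionregistrationv1.SideEffectClassNone
    failurePolicy := admissionregistrationv1.Fail

    cfg := &admissionregistrationv1.MutatingWebhookConfiguration{
        ObjectMeta: metav1.ObjectMeta{Name: "mutate-configmap-demo"},
        Webhooks: []admissionregistrationv1.MutatingWebhook{{
            Name: "mutate-configmaps.example.com",
            ClientConfig: admissionregistrationv1.WebhookClientConfig{
                Service: &admissionregistrationv1.ServiceReference{
                    Namespace: "webhook-demo",
                    Name:      "e2e-test-webhook",
                    Path:      &path,
                },
                CABundle: caBundle,
            },
            Rules: []admissionregistrationv1.RuleWithOperations{{
                Operations: []admissionregistrationv1.OperationType{admissionregistrationv1.Create},
                Rule: admissionregistrationv1.Rule{
                    APIGroups:   []string{""},
                    APIVersions: []string{"v1"},
                    Resources:   []string{"configmaps"},
                },
            }},
            SideEffects:             &sideEffects,
            FailurePolicy:           &failurePolicy,
            AdmissionReviewVersions: []string{"v1", "v1beta1"},
        }},
    }
    _, err := clientset.AdmissionregistrationV1().MutatingWebhookConfigurations().Create(context.TODO(), cfg, metav1.CreateOptions{})
    return err
}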
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:22.091 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":309,"completed":68,"skipped":1299,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:39:19.436: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test substitution in container's args Jan 11 16:39:19.563: INFO: Waiting up to 5m0s for pod "var-expansion-8b27a739-47f3-4aeb-9b9c-6b04279ed421" in namespace "var-expansion-2944" to be "Succeeded or Failed" Jan 11 16:39:19.580: INFO: Pod "var-expansion-8b27a739-47f3-4aeb-9b9c-6b04279ed421": Phase="Pending", Reason="", readiness=false. Elapsed: 17.105287ms Jan 11 16:39:21.684: INFO: Pod "var-expansion-8b27a739-47f3-4aeb-9b9c-6b04279ed421": Phase="Pending", Reason="", readiness=false. Elapsed: 2.12076476s Jan 11 16:39:23.692: INFO: Pod "var-expansion-8b27a739-47f3-4aeb-9b9c-6b04279ed421": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.128937369s STEP: Saw pod success Jan 11 16:39:23.692: INFO: Pod "var-expansion-8b27a739-47f3-4aeb-9b9c-6b04279ed421" satisfied condition "Succeeded or Failed" Jan 11 16:39:23.697: INFO: Trying to get logs from node leguer-worker2 pod var-expansion-8b27a739-47f3-4aeb-9b9c-6b04279ed421 container dapi-container: STEP: delete the pod Jan 11 16:39:23.742: INFO: Waiting for pod var-expansion-8b27a739-47f3-4aeb-9b9c-6b04279ed421 to disappear Jan 11 16:39:23.754: INFO: Pod var-expansion-8b27a739-47f3-4aeb-9b9c-6b04279ed421 no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:39:23.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-2944" for this suite. 
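The Variable Expansion case above ("should allow substituting values in a container's args") creates a pod whose args reference an environment variable with $(VAR) syntax and verifies the expanded value in the container output. A minimal sketch of such a pod spec; the variable name, image, and message are illustrative only:

package example

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// argsExpansionPod builds a pod whose container args reference $(MESSAGE); the
// kubelet expands the reference from the container's environment before start.
func argsExpansionPod() *corev1.Pod {
    return &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "dapi-container",
                Image:   "busybox",
                Command: []string{"sh", "-c"},
                Args:    []string{"echo $(MESSAGE)"},
                Env: []corev1.EnvVar{{
                    Name:  "MESSAGE",
                    Value: "hello from the args test",
                }},
            }},
        },
    }
}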
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":309,"completed":69,"skipped":1308,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:39:23.781: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Jan 11 16:39:23.919: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Registering the sample API server. Jan 11 16:39:35.905: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Jan 11 16:39:38.591: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745979975, loc:(*time.Location)(0x5f133f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745979975, loc:(*time.Location)(0x5f133f0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745979975, loc:(*time.Location)(0x5f133f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745979975, loc:(*time.Location)(0x5f133f0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-67dc674868\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 11 16:39:41.845: INFO: Waited 1.12582397s for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:39:42.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-3694" for this suite. 
• [SLOW TEST:18.870 seconds] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":309,"completed":70,"skipped":1315,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:39:42.654: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should be able to change the type from ExternalName to NodePort [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating a service externalname-service with the type=ExternalName in namespace services-4461 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-4461 I0111 16:39:43.081802 10 runners.go:190] Created replication controller with name: externalname-service, namespace: services-4461, replica count: 2 I0111 16:39:46.133685 10 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0111 16:39:49.134535 10 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 11 16:39:49.134: INFO: Creating new exec pod Jan 11 16:39:54.217: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-4461 exec execpodl9qxm -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Jan 11 16:39:55.828: INFO: stderr: "I0111 16:39:55.691352 422 log.go:181] (0x27fd6c0) (0x27fd730) Create stream\nI0111 16:39:55.694202 422 log.go:181] (0x27fd6c0) (0x27fd730) Stream added, broadcasting: 1\nI0111 16:39:55.713798 422 log.go:181] (0x27fd6c0) Reply frame received for 1\nI0111 16:39:55.714334 422 log.go:181] (0x27fd6c0) (0x2c520e0) Create stream\nI0111 16:39:55.714417 422 log.go:181] (0x27fd6c0) (0x2c520e0) Stream added, broadcasting: 3\nI0111 16:39:55.717559 422 log.go:181] (0x27fd6c0) Reply frame received for 3\nI0111 16:39:55.717805 422 log.go:181] (0x27fd6c0) (0x27fc070) Create stream\nI0111 16:39:55.717885 422 log.go:181] (0x27fd6c0) (0x27fc070) Stream added, broadcasting: 5\nI0111 16:39:55.718965 422 log.go:181] (0x27fd6c0) Reply frame received for 5\nI0111 16:39:55.810752 422 log.go:181] (0x27fd6c0) Data frame received for 3\nI0111 16:39:55.811069 422 log.go:181] (0x27fd6c0) Data 
frame received for 5\nI0111 16:39:55.811324 422 log.go:181] (0x27fc070) (5) Data frame handling\nI0111 16:39:55.811632 422 log.go:181] (0x2c520e0) (3) Data frame handling\nI0111 16:39:55.812539 422 log.go:181] (0x27fd6c0) Data frame received for 1\nI0111 16:39:55.812634 422 log.go:181] (0x27fd730) (1) Data frame handling\n+ nc -zv -t -w 2 externalname-service 80\nI0111 16:39:55.814328 422 log.go:181] (0x27fc070) (5) Data frame sent\nI0111 16:39:55.814533 422 log.go:181] (0x27fd730) (1) Data frame sent\nI0111 16:39:55.814654 422 log.go:181] (0x27fd6c0) Data frame received for 5\nI0111 16:39:55.814750 422 log.go:181] (0x27fc070) (5) Data frame handling\nI0111 16:39:55.814877 422 log.go:181] (0x27fc070) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0111 16:39:55.814976 422 log.go:181] (0x27fd6c0) Data frame received for 5\nI0111 16:39:55.816010 422 log.go:181] (0x27fd6c0) (0x27fd730) Stream removed, broadcasting: 1\nI0111 16:39:55.817311 422 log.go:181] (0x27fc070) (5) Data frame handling\nI0111 16:39:55.817704 422 log.go:181] (0x27fd6c0) Go away received\nI0111 16:39:55.819729 422 log.go:181] (0x27fd6c0) (0x27fd730) Stream removed, broadcasting: 1\nI0111 16:39:55.819933 422 log.go:181] (0x27fd6c0) (0x2c520e0) Stream removed, broadcasting: 3\nI0111 16:39:55.820378 422 log.go:181] (0x27fd6c0) (0x27fc070) Stream removed, broadcasting: 5\n" Jan 11 16:39:55.829: INFO: stdout: "" Jan 11 16:39:55.833: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-4461 exec execpodl9qxm -- /bin/sh -x -c nc -zv -t -w 2 10.96.232.249 80' Jan 11 16:39:57.289: INFO: stderr: "I0111 16:39:57.140761 442 log.go:181] (0x302c000) (0x302c0e0) Create stream\nI0111 16:39:57.144484 442 log.go:181] (0x302c000) (0x302c0e0) Stream added, broadcasting: 1\nI0111 16:39:57.166559 442 log.go:181] (0x302c000) Reply frame received for 1\nI0111 16:39:57.167189 442 log.go:181] (0x302c000) (0x302c1c0) Create stream\nI0111 16:39:57.167278 442 log.go:181] (0x302c000) (0x302c1c0) Stream added, broadcasting: 3\nI0111 16:39:57.169068 442 log.go:181] (0x302c000) Reply frame received for 3\nI0111 16:39:57.169311 442 log.go:181] (0x302c000) (0x28d60e0) Create stream\nI0111 16:39:57.169380 442 log.go:181] (0x302c000) (0x28d60e0) Stream added, broadcasting: 5\nI0111 16:39:57.170616 442 log.go:181] (0x302c000) Reply frame received for 5\nI0111 16:39:57.271270 442 log.go:181] (0x302c000) Data frame received for 3\nI0111 16:39:57.271786 442 log.go:181] (0x302c1c0) (3) Data frame handling\nI0111 16:39:57.271937 442 log.go:181] (0x302c000) Data frame received for 5\nI0111 16:39:57.272155 442 log.go:181] (0x28d60e0) (5) Data frame handling\nI0111 16:39:57.272511 442 log.go:181] (0x302c000) Data frame received for 1\nI0111 16:39:57.272782 442 log.go:181] (0x302c0e0) (1) Data frame handling\nI0111 16:39:57.274238 442 log.go:181] (0x302c0e0) (1) Data frame sent\n+ nc -zv -t -w 2 10.96.232.249 80\nConnection to 10.96.232.249 80 port [tcp/http] succeeded!\nI0111 16:39:57.275568 442 log.go:181] (0x28d60e0) (5) Data frame sent\nI0111 16:39:57.275653 442 log.go:181] (0x302c000) Data frame received for 5\nI0111 16:39:57.275714 442 log.go:181] (0x28d60e0) (5) Data frame handling\nI0111 16:39:57.276692 442 log.go:181] (0x302c000) (0x302c0e0) Stream removed, broadcasting: 1\nI0111 16:39:57.279097 442 log.go:181] (0x302c000) Go away received\nI0111 16:39:57.281329 442 log.go:181] (0x302c000) (0x302c0e0) Stream removed, broadcasting: 1\nI0111 
16:39:57.281510 442 log.go:181] (0x302c000) (0x302c1c0) Stream removed, broadcasting: 3\nI0111 16:39:57.281647 442 log.go:181] (0x302c000) (0x28d60e0) Stream removed, broadcasting: 5\n" Jan 11 16:39:57.290: INFO: stdout: "" Jan 11 16:39:57.290: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-4461 exec execpodl9qxm -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.13 30970' Jan 11 16:39:58.755: INFO: stderr: "I0111 16:39:58.613351 462 log.go:181] (0x26cc770) (0x26cc930) Create stream\nI0111 16:39:58.615627 462 log.go:181] (0x26cc770) (0x26cc930) Stream added, broadcasting: 1\nI0111 16:39:58.634567 462 log.go:181] (0x26cc770) Reply frame received for 1\nI0111 16:39:58.635155 462 log.go:181] (0x26cc770) (0x2b640e0) Create stream\nI0111 16:39:58.635243 462 log.go:181] (0x26cc770) (0x2b640e0) Stream added, broadcasting: 3\nI0111 16:39:58.636961 462 log.go:181] (0x26cc770) Reply frame received for 3\nI0111 16:39:58.637198 462 log.go:181] (0x26cc770) (0x297c0e0) Create stream\nI0111 16:39:58.637288 462 log.go:181] (0x26cc770) (0x297c0e0) Stream added, broadcasting: 5\nI0111 16:39:58.638481 462 log.go:181] (0x26cc770) Reply frame received for 5\nI0111 16:39:58.737964 462 log.go:181] (0x26cc770) Data frame received for 3\nI0111 16:39:58.738190 462 log.go:181] (0x26cc770) Data frame received for 1\nI0111 16:39:58.738668 462 log.go:181] (0x26cc770) Data frame received for 5\nI0111 16:39:58.738944 462 log.go:181] (0x297c0e0) (5) Data frame handling\nI0111 16:39:58.739122 462 log.go:181] (0x26cc930) (1) Data frame handling\nI0111 16:39:58.739413 462 log.go:181] (0x2b640e0) (3) Data frame handling\nI0111 16:39:58.741971 462 log.go:181] (0x26cc930) (1) Data frame sent\nI0111 16:39:58.742526 462 log.go:181] (0x297c0e0) (5) Data frame sent\nI0111 16:39:58.742631 462 log.go:181] (0x26cc770) Data frame received for 5\n+ nc -zv -t -w 2 172.18.0.13 30970\nConnection to 172.18.0.13 30970 port [tcp/30970] succeeded!\nI0111 16:39:58.743051 462 log.go:181] (0x26cc770) (0x26cc930) Stream removed, broadcasting: 1\nI0111 16:39:58.743444 462 log.go:181] (0x297c0e0) (5) Data frame handling\nI0111 16:39:58.743925 462 log.go:181] (0x26cc770) Go away received\nI0111 16:39:58.745545 462 log.go:181] (0x26cc770) (0x26cc930) Stream removed, broadcasting: 1\nI0111 16:39:58.745926 462 log.go:181] (0x26cc770) (0x2b640e0) Stream removed, broadcasting: 3\nI0111 16:39:58.746171 462 log.go:181] (0x26cc770) (0x297c0e0) Stream removed, broadcasting: 5\n" Jan 11 16:39:58.756: INFO: stdout: "" Jan 11 16:39:58.756: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-4461 exec execpodl9qxm -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.12 30970' Jan 11 16:40:00.199: INFO: stderr: "I0111 16:40:00.064670 483 log.go:181] (0x2890000) (0x2890070) Create stream\nI0111 16:40:00.067479 483 log.go:181] (0x2890000) (0x2890070) Stream added, broadcasting: 1\nI0111 16:40:00.077954 483 log.go:181] (0x2890000) Reply frame received for 1\nI0111 16:40:00.078762 483 log.go:181] (0x2890000) (0x27de070) Create stream\nI0111 16:40:00.078876 483 log.go:181] (0x2890000) (0x27de070) Stream added, broadcasting: 3\nI0111 16:40:00.080820 483 log.go:181] (0x2890000) Reply frame received for 3\nI0111 16:40:00.081312 483 log.go:181] (0x2890000) (0x2890850) Create stream\nI0111 16:40:00.081415 483 log.go:181] (0x2890000) (0x2890850) Stream added, broadcasting: 5\nI0111 16:40:00.092016 483 log.go:181] (0x2890000) Reply frame 
received for 5\nI0111 16:40:00.185446 483 log.go:181] (0x2890000) Data frame received for 3\nI0111 16:40:00.185754 483 log.go:181] (0x27de070) (3) Data frame handling\nI0111 16:40:00.185889 483 log.go:181] (0x2890000) Data frame received for 5\nI0111 16:40:00.186076 483 log.go:181] (0x2890850) (5) Data frame handling\nI0111 16:40:00.186257 483 log.go:181] (0x2890000) Data frame received for 1\nI0111 16:40:00.186415 483 log.go:181] (0x2890070) (1) Data frame handling\nI0111 16:40:00.187483 483 log.go:181] (0x2890850) (5) Data frame sent\nI0111 16:40:00.187694 483 log.go:181] (0x2890070) (1) Data frame sent\n+ nc -zv -t -w 2 172.18.0.12 30970\nConnection to 172.18.0.12 30970 port [tcp/30970] succeeded!\nI0111 16:40:00.188442 483 log.go:181] (0x2890000) Data frame received for 5\nI0111 16:40:00.188564 483 log.go:181] (0x2890850) (5) Data frame handling\nI0111 16:40:00.190183 483 log.go:181] (0x2890000) (0x2890070) Stream removed, broadcasting: 1\nI0111 16:40:00.190512 483 log.go:181] (0x2890000) Go away received\nI0111 16:40:00.192499 483 log.go:181] (0x2890000) (0x2890070) Stream removed, broadcasting: 1\nI0111 16:40:00.192678 483 log.go:181] (0x2890000) (0x27de070) Stream removed, broadcasting: 3\nI0111 16:40:00.192898 483 log.go:181] (0x2890000) (0x2890850) Stream removed, broadcasting: 5\n" Jan 11 16:40:00.200: INFO: stdout: "" Jan 11 16:40:00.200: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:40:00.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4461" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 • [SLOW TEST:17.647 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":309,"completed":71,"skipped":1351,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:40:00.303: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating secret with name s-test-opt-del-5d9c5094-e395-4315-bbcf-8f6ccb54b9a0 STEP: Creating secret with name s-test-opt-upd-2d0f4298-eb82-4386-baa5-e9b36eaa5f4e STEP: Creating the pod STEP: Deleting secret s-test-opt-del-5d9c5094-e395-4315-bbcf-8f6ccb54b9a0 STEP: Updating 
secret s-test-opt-upd-2d0f4298-eb82-4386-baa5-e9b36eaa5f4e STEP: Creating secret with name s-test-opt-create-05577090-8d82-4ef1-879c-8a0fa5536bbb STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:40:08.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9557" for this suite. • [SLOW TEST:8.414 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":309,"completed":72,"skipped":1363,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:40:08.719: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating the pod Jan 11 16:40:13.566: INFO: Successfully updated pod "labelsupdatea685abb6-6865-44a1-8fa5-db72eb8356a5" [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:40:17.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9662" for this suite. 
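The Projected downwardAPI case above ("should update labels on modification") mounts the pod's own labels through a projected downwardAPI volume and expects the file contents to change after the pod's labels are patched. A minimal sketch of the volume wiring, with placeholder names and image:

package example

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// labelsProjectionPod exposes the pod's metadata.labels as a file through a
// projected downwardAPI volume; the kubelet rewrites the file when labels change.
func labelsProjectionPod() *corev1.Pod {
    return &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{
            Name:   "labelsupdate-demo",
            Labels: map[string]string{"step": "one"},
        },
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:    "client-container",
                Image:   "busybox",
                Command: []string{"sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"},
                VolumeMounts: []corev1.VolumeMount{{
                    Name:      "podinfo",
                    MountPath: "/etc/podinfo",
                }},
            }},
            Volumes: []corev1.Volume{{
                Name: "podinfo",
                VolumeSource: corev1.VolumeSource{
                    Projected: &corev1.ProjectedVolumeSource{
                        Sources: []corev1.VolumeProjection{{
                            DownwardAPI: &corev1.DownwardAPIProjection{
                                Items: []corev1.DownwardAPIVolumeFile{{
                                    Path:     "labels",
                                    FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.labels"},
                                }},
                            },
                        }},
                    },
                },
            }},
        },
    }
}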
• [SLOW TEST:8.927 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35 should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":309,"completed":73,"skipped":1369,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:40:17.652: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating replication controller my-hostname-basic-e870f1cd-892d-4bc0-b1ab-7f13b5407d26 Jan 11 16:40:17.821: INFO: Pod name my-hostname-basic-e870f1cd-892d-4bc0-b1ab-7f13b5407d26: Found 0 pods out of 1 Jan 11 16:40:22.829: INFO: Pod name my-hostname-basic-e870f1cd-892d-4bc0-b1ab-7f13b5407d26: Found 1 pods out of 1 Jan 11 16:40:22.830: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-e870f1cd-892d-4bc0-b1ab-7f13b5407d26" are running Jan 11 16:40:22.835: INFO: Pod "my-hostname-basic-e870f1cd-892d-4bc0-b1ab-7f13b5407d26-s7v69" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-01-11 16:40:17 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-01-11 16:40:20 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-01-11 16:40:20 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-01-11 16:40:17 +0000 UTC Reason: Message:}]) Jan 11 16:40:22.839: INFO: Trying to dial the pod Jan 11 16:40:27.863: INFO: Controller my-hostname-basic-e870f1cd-892d-4bc0-b1ab-7f13b5407d26: Got expected result from replica 1 [my-hostname-basic-e870f1cd-892d-4bc0-b1ab-7f13b5407d26-s7v69]: "my-hostname-basic-e870f1cd-892d-4bc0-b1ab-7f13b5407d26-s7v69", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:40:27.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-5735" for this suite. 
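The ReplicationController case above ("should serve a basic image on each replica with a public image") creates a one-replica RC whose pods serve their own hostname, then dials each replica and compares the response. A hedged client-go sketch of the creation step; the agnhost image tag and port are assumptions, not values read from this run:

package example

import (
    "context"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// createHostnameRC creates a one-replica ReplicationController whose pods serve
// their own hostname, which is what the conformance test dials and compares.
func createHostnameRC(clientset kubernetes.Interface, namespace, name string) (*corev1.ReplicationController, error) {
    replicas := int32(1)
    labels := map[string]string{"name": name}
    rc := &corev1.ReplicationController{
        ObjectMeta: metav1.ObjectMeta{Name: name},
        Spec: corev1.ReplicationControllerSpec{
            Replicas: &replicas,
            Selector: labels,
            Template: &corev1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{Labels: labels},
                Spec: corev1.PodSpec{
                    Containers: []corev1.Container{{
                        Name:  name,
                        Image: "k8s.gcr.io/e2e-test-images/agnhost:2.21", // image/tag is an assumption
                        Args:  []string{"serve-hostname"},
                        Ports: []corev1.ContainerPort{{ContainerPort: 9376}},
                    }},
                },
            },
        },
    }
    return clientset.CoreV1().ReplicationControllers(namespace).Create(context.TODO(), rc, metav1.CreateOptions{})
}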
• [SLOW TEST:10.226 seconds] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":309,"completed":74,"skipped":1426,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:40:27.880: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should run through a ConfigMap lifecycle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating a ConfigMap STEP: fetching the ConfigMap STEP: patching the ConfigMap STEP: listing all ConfigMaps in all namespaces with a label selector STEP: deleting the ConfigMap by collection with a label selector STEP: listing all ConfigMaps in test namespace [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:40:28.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1723" for this suite. 
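The ConfigMap lifecycle case above walks one object through create, fetch, patch, list by label selector, and delete-by-collection. A client-go sketch of the same sequence; the names, label, and patch payload are placeholders:

package example

import (
    "context"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/types"
    "k8s.io/client-go/kubernetes"
)

// configMapLifecycle drives a ConfigMap through create, get, patch, list by
// label selector, and DeleteCollection, the same sequence the test logs above.
func configMapLifecycle(clientset kubernetes.Interface, namespace string) error {
    cmClient := clientset.CoreV1().ConfigMaps(namespace)

    cm := &corev1.ConfigMap{
        ObjectMeta: metav1.ObjectMeta{
            Name:   "lifecycle-demo",
            Labels: map[string]string{"test-configmap": "lifecycle"},
        },
        Data: map[string]string{"key": "original"},
    }
    if _, err := cmClient.Create(context.TODO(), cm, metav1.CreateOptions{}); err != nil {
        return err
    }
    if _, err := cmClient.Get(context.TODO(), "lifecycle-demo", metav1.GetOptions{}); err != nil {
        return err
    }

    patch := []byte(`{"data":{"key":"patched"}}`)
    if _, err := cmClient.Patch(context.TODO(), "lifecycle-demo", types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
        return err
    }

    if _, err := cmClient.List(context.TODO(), metav1.ListOptions{LabelSelector: "test-configmap=lifecycle"}); err != nil {
        return err
    }

    return cmClient.DeleteCollection(context.TODO(),
        metav1.DeleteOptions{},
        metav1.ListOptions{LabelSelector: "test-configmap=lifecycle"})
}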
•{"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":309,"completed":75,"skipped":1439,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:40:28.029: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Jan 11 16:40:28.163: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5641 838f668c-a0f6-43ee-853f-56bff19824e4 194292 0 2021-01-11 16:40:28 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-01-11 16:40:28 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Jan 11 16:40:28.166: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5641 838f668c-a0f6-43ee-853f-56bff19824e4 194293 0 2021-01-11 16:40:28 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-01-11 16:40:28 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Jan 11 16:40:28.167: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5641 838f668c-a0f6-43ee-853f-56bff19824e4 194294 0 2021-01-11 16:40:28 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-01-11 16:40:28 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Jan 11 16:40:38.230: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5641 838f668c-a0f6-43ee-853f-56bff19824e4 194339 0 2021-01-11 16:40:28 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-01-11 16:40:28 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 
2,},BinaryData:map[string][]byte{},Immutable:nil,} Jan 11 16:40:38.231: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5641 838f668c-a0f6-43ee-853f-56bff19824e4 194340 0 2021-01-11 16:40:28 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-01-11 16:40:28 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} Jan 11 16:40:38.233: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5641 838f668c-a0f6-43ee-853f-56bff19824e4 194341 0 2021-01-11 16:40:28 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-01-11 16:40:28 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:40:38.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-5641" for this suite. • [SLOW TEST:10.252 seconds] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":309,"completed":76,"skipped":1463,"failed":0} [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:40:38.282: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a volume subpath [sig-storage] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test substitution in volume subpath Jan 11 16:40:38.436: INFO: Waiting up to 5m0s for pod "var-expansion-053fc292-6551-4b3b-9e33-0b52bb14e0ff" in namespace "var-expansion-9858" to be "Succeeded or Failed" Jan 11 16:40:38.444: INFO: Pod "var-expansion-053fc292-6551-4b3b-9e33-0b52bb14e0ff": Phase="Pending", Reason="", readiness=false. Elapsed: 7.757881ms Jan 11 16:40:40.454: INFO: Pod "var-expansion-053fc292-6551-4b3b-9e33-0b52bb14e0ff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017179094s Jan 11 16:40:42.471: INFO: Pod "var-expansion-053fc292-6551-4b3b-9e33-0b52bb14e0ff": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.034166287s STEP: Saw pod success Jan 11 16:40:42.471: INFO: Pod "var-expansion-053fc292-6551-4b3b-9e33-0b52bb14e0ff" satisfied condition "Succeeded or Failed" Jan 11 16:40:42.481: INFO: Trying to get logs from node leguer-worker pod var-expansion-053fc292-6551-4b3b-9e33-0b52bb14e0ff container dapi-container: STEP: delete the pod Jan 11 16:40:42.513: INFO: Waiting for pod var-expansion-053fc292-6551-4b3b-9e33-0b52bb14e0ff to disappear Jan 11 16:40:42.539: INFO: Pod var-expansion-053fc292-6551-4b3b-9e33-0b52bb14e0ff no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:40:42.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-9858" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance]","total":309,"completed":77,"skipped":1463,"failed":0} SSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:40:42.557: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:41:14.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-1765" for this suite. 
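The Container Runtime blackbox case above starts containers that exit under different restart policies and asserts on the resulting RestartCount, Phase, Ready condition, and State. A small client-go sketch of the status-polling side, under assumed names (a real implementation would bound the loop with a timeout):

package example

import (
    "context"
    "time"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// waitForTermination polls a pod until its first container has terminated and
// returns the exit code and restart count, the fields this kind of test asserts on.
func waitForTermination(clientset kubernetes.Interface, namespace, podName string) (int32, int32, error) {
    for {
        pod, err := clientset.CoreV1().Pods(namespace).Get(context.TODO(), podName, metav1.GetOptions{})
        if err != nil {
            return 0, 0, err
        }
        if len(pod.Status.ContainerStatuses) > 0 {
            st := pod.Status.ContainerStatuses[0]
            if st.State.Terminated != nil {
                return st.State.Terminated.ExitCode, st.RestartCount, nil
            }
        }
        time.Sleep(2 * time.Second)
    }
}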
• [SLOW TEST:31.831 seconds] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 when starting a container that exits /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:42 should run with the expected status [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":309,"completed":78,"skipped":1466,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:41:14.391: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward API volume plugin Jan 11 16:41:14.516: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d0f65925-893c-4d61-b653-d6a06b41c70c" in namespace "projected-776" to be "Succeeded or Failed" Jan 11 16:41:14.521: INFO: Pod "downwardapi-volume-d0f65925-893c-4d61-b653-d6a06b41c70c": Phase="Pending", Reason="", readiness=false. Elapsed: 5.073019ms Jan 11 16:41:16.529: INFO: Pod "downwardapi-volume-d0f65925-893c-4d61-b653-d6a06b41c70c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013156473s Jan 11 16:41:18.539: INFO: Pod "downwardapi-volume-d0f65925-893c-4d61-b653-d6a06b41c70c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023083265s Jan 11 16:41:20.548: INFO: Pod "downwardapi-volume-d0f65925-893c-4d61-b653-d6a06b41c70c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.031618076s STEP: Saw pod success Jan 11 16:41:20.548: INFO: Pod "downwardapi-volume-d0f65925-893c-4d61-b653-d6a06b41c70c" satisfied condition "Succeeded or Failed" Jan 11 16:41:20.553: INFO: Trying to get logs from node leguer-worker2 pod downwardapi-volume-d0f65925-893c-4d61-b653-d6a06b41c70c container client-container: STEP: delete the pod Jan 11 16:41:20.588: INFO: Waiting for pod downwardapi-volume-d0f65925-893c-4d61-b653-d6a06b41c70c to disappear Jan 11 16:41:20.602: INFO: Pod downwardapi-volume-d0f65925-893c-4d61-b653-d6a06b41c70c no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:41:20.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-776" for this suite. • [SLOW TEST:6.226 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":309,"completed":79,"skipped":1496,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-instrumentation] Events API should delete a collection of events [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:41:20.618: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81 [It] should delete a collection of events [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Create set of events STEP: get a list of Events with a label in the current namespace STEP: delete a list of events Jan 11 16:41:20.786: INFO: requesting DeleteCollection of events STEP: check that the list of events matches the requested quantity [AfterEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:41:20.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-1662" for this suite. 
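The Events API case above ("should delete a collection of events") creates a set of events.k8s.io/v1 Events, lists them by label, deletes them as a collection, and re-lists to confirm they are gone. A hedged sketch of the delete-and-verify half; the label selector is a placeholder, not the one the test actually applies:

package example

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// deleteEventCollection removes all events.k8s.io/v1 Events in a namespace that
// match a label selector, then re-lists to confirm the collection is empty.
func deleteEventCollection(clientset kubernetes.Interface, namespace, selector string) error {
    events := clientset.EventsV1().Events(namespace)

    if err := events.DeleteCollection(context.TODO(),
        metav1.DeleteOptions{},
        metav1.ListOptions{LabelSelector: selector}); err != nil {
        return err
    }

    remaining, err := events.List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
    if err != nil {
        return err
    }
    fmt.Printf("%d events remain after DeleteCollection\n", len(remaining.Items))
    return nil
}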
•{"msg":"PASSED [sig-instrumentation] Events API should delete a collection of events [Conformance]","total":309,"completed":80,"skipped":1508,"failed":0} SS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:41:20.824: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation Jan 11 16:41:20.918: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation Jan 11 16:42:52.125: INFO: >>> kubeConfig: /root/.kube/config Jan 11 16:43:14.827: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:44:45.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9438" for this suite. 
• [SLOW TEST:204.659 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":309,"completed":81,"skipped":1510,"failed":0} SSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:44:45.484: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Performing setup for networking test in namespace pod-network-test-3350 STEP: creating a selector STEP: Creating the service pods in kubernetes Jan 11 16:44:45.572: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Jan 11 16:44:45.621: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 11 16:44:47.710: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 11 16:44:49.630: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 11 16:44:51.631: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 11 16:44:53.632: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 11 16:44:55.635: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 11 16:44:57.631: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 11 16:44:59.630: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 11 16:45:01.630: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 11 16:45:03.631: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 11 16:45:05.630: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 11 16:45:07.631: INFO: The status of Pod netserver-0 is Running (Ready = true) Jan 11 16:45:07.642: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Jan 11 16:45:11.688: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 Jan 11 16:45:11.688: INFO: Breadth first check of 10.244.2.233 on host 172.18.0.13... 
Jan 11 16:45:11.692: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.248:9080/dial?request=hostname&protocol=http&host=10.244.2.233&port=8080&tries=1'] Namespace:pod-network-test-3350 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 11 16:45:11.693: INFO: >>> kubeConfig: /root/.kube/config I0111 16:45:11.808404 10 log.go:181] (0xb4aa540) (0xb4aa5b0) Create stream I0111 16:45:11.808613 10 log.go:181] (0xb4aa540) (0xb4aa5b0) Stream added, broadcasting: 1 I0111 16:45:11.812612 10 log.go:181] (0xb4aa540) Reply frame received for 1 I0111 16:45:11.813028 10 log.go:181] (0xb4aa540) (0xbd80070) Create stream I0111 16:45:11.813229 10 log.go:181] (0xb4aa540) (0xbd80070) Stream added, broadcasting: 3 I0111 16:45:11.815136 10 log.go:181] (0xb4aa540) Reply frame received for 3 I0111 16:45:11.815328 10 log.go:181] (0xb4aa540) (0xb4aa770) Create stream I0111 16:45:11.815438 10 log.go:181] (0xb4aa540) (0xb4aa770) Stream added, broadcasting: 5 I0111 16:45:11.817153 10 log.go:181] (0xb4aa540) Reply frame received for 5 I0111 16:45:11.894341 10 log.go:181] (0xb4aa540) Data frame received for 3 I0111 16:45:11.894580 10 log.go:181] (0xbd80070) (3) Data frame handling I0111 16:45:11.894794 10 log.go:181] (0xbd80070) (3) Data frame sent I0111 16:45:11.895000 10 log.go:181] (0xb4aa540) Data frame received for 3 I0111 16:45:11.895275 10 log.go:181] (0xb4aa540) Data frame received for 5 I0111 16:45:11.895516 10 log.go:181] (0xb4aa770) (5) Data frame handling I0111 16:45:11.895670 10 log.go:181] (0xbd80070) (3) Data frame handling I0111 16:45:11.896271 10 log.go:181] (0xb4aa540) Data frame received for 1 I0111 16:45:11.896456 10 log.go:181] (0xb4aa5b0) (1) Data frame handling I0111 16:45:11.896661 10 log.go:181] (0xb4aa5b0) (1) Data frame sent I0111 16:45:11.897073 10 log.go:181] (0xb4aa540) (0xb4aa5b0) Stream removed, broadcasting: 1 I0111 16:45:11.897299 10 log.go:181] (0xb4aa540) Go away received I0111 16:45:11.897776 10 log.go:181] (0xb4aa540) (0xb4aa5b0) Stream removed, broadcasting: 1 I0111 16:45:11.897914 10 log.go:181] (0xb4aa540) (0xbd80070) Stream removed, broadcasting: 3 I0111 16:45:11.898027 10 log.go:181] (0xb4aa540) (0xb4aa770) Stream removed, broadcasting: 5 Jan 11 16:45:11.898: INFO: Waiting for responses: map[] Jan 11 16:45:11.898: INFO: reached 10.244.2.233 after 0/1 tries Jan 11 16:45:11.898: INFO: Breadth first check of 10.244.1.247 on host 172.18.0.12... 
Jan 11 16:45:11.914: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.248:9080/dial?request=hostname&protocol=http&host=10.244.1.247&port=8080&tries=1'] Namespace:pod-network-test-3350 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 11 16:45:11.915: INFO: >>> kubeConfig: /root/.kube/config I0111 16:45:12.018653 10 log.go:181] (0xbd80540) (0xbd805b0) Create stream I0111 16:45:12.018771 10 log.go:181] (0xbd80540) (0xbd805b0) Stream added, broadcasting: 1 I0111 16:45:12.022028 10 log.go:181] (0xbd80540) Reply frame received for 1 I0111 16:45:12.022174 10 log.go:181] (0xbd80540) (0xb196b60) Create stream I0111 16:45:12.022247 10 log.go:181] (0xbd80540) (0xb196b60) Stream added, broadcasting: 3 I0111 16:45:12.023342 10 log.go:181] (0xbd80540) Reply frame received for 3 I0111 16:45:12.023468 10 log.go:181] (0xbd80540) (0xbcd0380) Create stream I0111 16:45:12.023549 10 log.go:181] (0xbd80540) (0xbcd0380) Stream added, broadcasting: 5 I0111 16:45:12.025159 10 log.go:181] (0xbd80540) Reply frame received for 5 I0111 16:45:12.096585 10 log.go:181] (0xbd80540) Data frame received for 3 I0111 16:45:12.096722 10 log.go:181] (0xb196b60) (3) Data frame handling I0111 16:45:12.096931 10 log.go:181] (0xb196b60) (3) Data frame sent I0111 16:45:12.097286 10 log.go:181] (0xbd80540) Data frame received for 3 I0111 16:45:12.097395 10 log.go:181] (0xb196b60) (3) Data frame handling I0111 16:45:12.097519 10 log.go:181] (0xbd80540) Data frame received for 5 I0111 16:45:12.097680 10 log.go:181] (0xbcd0380) (5) Data frame handling I0111 16:45:12.098760 10 log.go:181] (0xbd80540) Data frame received for 1 I0111 16:45:12.098907 10 log.go:181] (0xbd805b0) (1) Data frame handling I0111 16:45:12.099025 10 log.go:181] (0xbd805b0) (1) Data frame sent I0111 16:45:12.099129 10 log.go:181] (0xbd80540) (0xbd805b0) Stream removed, broadcasting: 1 I0111 16:45:12.099256 10 log.go:181] (0xbd80540) Go away received I0111 16:45:12.099681 10 log.go:181] (0xbd80540) (0xbd805b0) Stream removed, broadcasting: 1 I0111 16:45:12.099910 10 log.go:181] (0xbd80540) (0xb196b60) Stream removed, broadcasting: 3 I0111 16:45:12.100096 10 log.go:181] (0xbd80540) (0xbcd0380) Stream removed, broadcasting: 5 Jan 11 16:45:12.100: INFO: Waiting for responses: map[] Jan 11 16:45:12.100: INFO: reached 10.244.1.247 after 0/1 tries Jan 11 16:45:12.100: INFO: Going to retry 0 out of 2 pods.... [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:45:12.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-3350" for this suite. 
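The check above boils down to the framework exec'ing curl against the test pod's "dial" endpoint on port 9080, which in turn asks the target netserver pod for its hostname on port 8080. Below is a minimal standalone sketch of that probe, not the suite's own code: the pod IPs are placeholders, and the {"responses": [...]} body shape is an assumption inferred from the "Waiting for responses" polling in the log.

// Minimal sketch of the /dial probe the test drives through ExecWithOptions.
// Run from inside the cluster; the pod IPs below are placeholders, and the
// JSON shape ({"responses": [...]}) is an assumption based on the log above.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/url"
)

func dial(proxyIP, targetIP string) ([]string, error) {
	q := url.Values{}
	q.Set("request", "hostname") // ask the target pod for its hostname
	q.Set("protocol", "http")
	q.Set("host", targetIP)
	q.Set("port", "8080")
	q.Set("tries", "1")

	resp, err := http.Get("http://" + proxyIP + ":9080/dial?" + q.Encode())
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	var out struct {
		Responses []string `json:"responses"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		return nil, err
	}
	return out.Responses, nil
}

func main() {
	// Placeholder IPs standing in for test-container-pod and one netserver pod.
	hostnames, err := dial("10.244.1.248", "10.244.2.233")
	if err != nil {
		panic(err)
	}
	fmt.Println("reached:", hostnames)
}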
• [SLOW TEST:26.633 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:27 Granular Checks: Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:30 should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":309,"completed":82,"skipped":1514,"failed":0} [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:45:12.118: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should provide container's memory request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward API volume plugin Jan 11 16:45:12.227: INFO: Waiting up to 5m0s for pod "downwardapi-volume-03dffbc5-606c-4a92-9e51-8e6b0ef90e5b" in namespace "projected-8613" to be "Succeeded or Failed" Jan 11 16:45:12.264: INFO: Pod "downwardapi-volume-03dffbc5-606c-4a92-9e51-8e6b0ef90e5b": Phase="Pending", Reason="", readiness=false. Elapsed: 36.675101ms Jan 11 16:45:14.275: INFO: Pod "downwardapi-volume-03dffbc5-606c-4a92-9e51-8e6b0ef90e5b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048311275s Jan 11 16:45:16.283: INFO: Pod "downwardapi-volume-03dffbc5-606c-4a92-9e51-8e6b0ef90e5b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.055835608s STEP: Saw pod success Jan 11 16:45:16.283: INFO: Pod "downwardapi-volume-03dffbc5-606c-4a92-9e51-8e6b0ef90e5b" satisfied condition "Succeeded or Failed" Jan 11 16:45:16.288: INFO: Trying to get logs from node leguer-worker pod downwardapi-volume-03dffbc5-606c-4a92-9e51-8e6b0ef90e5b container client-container: STEP: delete the pod Jan 11 16:45:16.595: INFO: Waiting for pod downwardapi-volume-03dffbc5-606c-4a92-9e51-8e6b0ef90e5b to disappear Jan 11 16:45:16.639: INFO: Pod downwardapi-volume-03dffbc5-606c-4a92-9e51-8e6b0ef90e5b no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:45:16.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8613" for this suite. 
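For reference, a pod equivalent to what this downward API case exercises (memory request exposed through a projected downward API volume) can be built with client-go roughly as below. This is a hedged sketch rather than the suite's own code; the namespace, names, and image are illustrative.

// Sketch: a pod whose memory request is exposed to the container through a
// projected downward API volume, created with client-go. Names, namespace,
// and image are placeholders.
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox:1.29",
				Command: []string{"sh", "-c", "cat /etc/podinfo/memory_request"},
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						corev1.ResourceMemory: resource.MustParse("32Mi"),
					},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path: "memory_request",
									ResourceFieldRef: &corev1.ResourceFieldSelector{
										ContainerName: "client-container",
										Resource:      "requests.memory",
									},
								}},
							},
						}},
					},
				},
			}},
		},
	}

	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}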
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":309,"completed":83,"skipped":1514,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:45:16.656: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:157 [It] should call prestop when killing a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating server pod server in namespace prestop-7314 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-7314 STEP: Deleting pre-stop pod Jan 11 16:45:29.912: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:45:29.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-7314" for this suite. 
• [SLOW TEST:13.339 seconds] [k8s.io] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should call prestop when killing a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":309,"completed":84,"skipped":1525,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:45:29.998: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods Jan 11 16:45:39.452: INFO: Successfully updated pod "adopt-release-dnb5r" STEP: Checking that the Job readopts the Pod Jan 11 16:45:39.453: INFO: Waiting up to 15m0s for pod "adopt-release-dnb5r" in namespace "job-77" to be "adopted" Jan 11 16:45:39.458: INFO: Pod "adopt-release-dnb5r": Phase="Running", Reason="", readiness=true. Elapsed: 5.040128ms Jan 11 16:45:41.487: INFO: Pod "adopt-release-dnb5r": Phase="Running", Reason="", readiness=true. Elapsed: 2.034055107s Jan 11 16:45:41.488: INFO: Pod "adopt-release-dnb5r" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod Jan 11 16:45:42.008: INFO: Successfully updated pod "adopt-release-dnb5r" STEP: Checking that the Job releases the Pod Jan 11 16:45:42.009: INFO: Waiting up to 15m0s for pod "adopt-release-dnb5r" in namespace "job-77" to be "released" Jan 11 16:45:42.027: INFO: Pod "adopt-release-dnb5r": Phase="Running", Reason="", readiness=true. Elapsed: 17.87332ms Jan 11 16:45:44.034: INFO: Pod "adopt-release-dnb5r": Phase="Running", Reason="", readiness=true. Elapsed: 2.02505343s Jan 11 16:45:44.034: INFO: Pod "adopt-release-dnb5r" satisfied condition "released" [AfterEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:45:44.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-77" for this suite. 
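The adopt/release mechanics driven above can be reproduced with client-go roughly as in the sketch below: clearing a pod's ownerReferences makes the Job controller re-adopt it (its labels still match the Job's selector), while removing the matching label makes the controller release it instead. The pod name and label key are placeholders, not the test's own.

// Sketch of the orphan/adopt and label-removal/release flow the Job test
// above drives. Error handling is minimal; names are illustrative.
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()
	pods := cs.CoreV1().Pods("default")

	// Orphan: drop the controller ownerReference. The Job controller re-adopts
	// the pod because its labels still match the Job's selector.
	pod, err := pods.Get(ctx, "adopt-release-demo", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	pod.OwnerReferences = nil
	if _, err := pods.Update(ctx, pod, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}

	// Release: remove the selector label. The controller then clears its
	// ownerReference instead, leaving the pod unmanaged.
	pod, err = pods.Get(ctx, "adopt-release-demo", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	delete(pod.Labels, "job") // placeholder for whatever label key the Job selects on
	if _, err := pods.Update(ctx, pod, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}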
• [SLOW TEST:14.050 seconds] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":309,"completed":85,"skipped":1542,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:45:44.050: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward API volume plugin Jan 11 16:45:44.396: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1a638e2e-25ad-46fd-bff7-d9e160bcf7f7" in namespace "projected-5168" to be "Succeeded or Failed" Jan 11 16:45:44.421: INFO: Pod "downwardapi-volume-1a638e2e-25ad-46fd-bff7-d9e160bcf7f7": Phase="Pending", Reason="", readiness=false. Elapsed: 24.436388ms Jan 11 16:45:46.427: INFO: Pod "downwardapi-volume-1a638e2e-25ad-46fd-bff7-d9e160bcf7f7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031269637s Jan 11 16:45:48.435: INFO: Pod "downwardapi-volume-1a638e2e-25ad-46fd-bff7-d9e160bcf7f7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038410089s STEP: Saw pod success Jan 11 16:45:48.435: INFO: Pod "downwardapi-volume-1a638e2e-25ad-46fd-bff7-d9e160bcf7f7" satisfied condition "Succeeded or Failed" Jan 11 16:45:48.441: INFO: Trying to get logs from node leguer-worker2 pod downwardapi-volume-1a638e2e-25ad-46fd-bff7-d9e160bcf7f7 container client-container: STEP: delete the pod Jan 11 16:45:48.492: INFO: Waiting for pod downwardapi-volume-1a638e2e-25ad-46fd-bff7-d9e160bcf7f7 to disappear Jan 11 16:45:48.498: INFO: Pod downwardapi-volume-1a638e2e-25ad-46fd-bff7-d9e160bcf7f7 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:45:48.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5168" for this suite. 
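A companion sketch for the per-item mode case: the same projected downward API volume as before, but with an explicit 0400 mode on the item (mode bits only apply on Linux, matching the [LinuxOnly] tag). The field reference, names, and image are again illustrative, not the suite's own.

// Sketch: projected downward API item with an explicit file mode.
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	mode := int32(0400) // read-only for the owner; what the test checks on the file
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-mode-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "busybox:1.29",
				Command:      []string{"sh", "-c", "ls -l /etc/podinfo/podname"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path: "podname",
									Mode: &mode,
									FieldRef: &corev1.ObjectFieldSelector{
										FieldPath: "metadata.name",
									},
								}},
							},
						}},
					},
				},
			}},
		},
	}

	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}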
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":86,"skipped":1572,"failed":0} SSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:45:48.529: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating pod pod-subpath-test-configmap-hxmm STEP: Creating a pod to test atomic-volume-subpath Jan 11 16:45:48.614: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-hxmm" in namespace "subpath-2182" to be "Succeeded or Failed" Jan 11 16:45:48.649: INFO: Pod "pod-subpath-test-configmap-hxmm": Phase="Pending", Reason="", readiness=false. Elapsed: 35.14476ms Jan 11 16:45:50.656: INFO: Pod "pod-subpath-test-configmap-hxmm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041903177s Jan 11 16:45:52.668: INFO: Pod "pod-subpath-test-configmap-hxmm": Phase="Running", Reason="", readiness=true. Elapsed: 4.05333401s Jan 11 16:45:54.676: INFO: Pod "pod-subpath-test-configmap-hxmm": Phase="Running", Reason="", readiness=true. Elapsed: 6.061457668s Jan 11 16:45:56.686: INFO: Pod "pod-subpath-test-configmap-hxmm": Phase="Running", Reason="", readiness=true. Elapsed: 8.071409522s Jan 11 16:45:58.694: INFO: Pod "pod-subpath-test-configmap-hxmm": Phase="Running", Reason="", readiness=true. Elapsed: 10.080139858s Jan 11 16:46:00.704: INFO: Pod "pod-subpath-test-configmap-hxmm": Phase="Running", Reason="", readiness=true. Elapsed: 12.090048684s Jan 11 16:46:02.715: INFO: Pod "pod-subpath-test-configmap-hxmm": Phase="Running", Reason="", readiness=true. Elapsed: 14.100425925s Jan 11 16:46:04.725: INFO: Pod "pod-subpath-test-configmap-hxmm": Phase="Running", Reason="", readiness=true. Elapsed: 16.111183204s Jan 11 16:46:06.737: INFO: Pod "pod-subpath-test-configmap-hxmm": Phase="Running", Reason="", readiness=true. Elapsed: 18.122440528s Jan 11 16:46:08.743: INFO: Pod "pod-subpath-test-configmap-hxmm": Phase="Running", Reason="", readiness=true. Elapsed: 20.128552031s Jan 11 16:46:10.749: INFO: Pod "pod-subpath-test-configmap-hxmm": Phase="Running", Reason="", readiness=true. Elapsed: 22.134399832s Jan 11 16:46:12.756: INFO: Pod "pod-subpath-test-configmap-hxmm": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.141853838s STEP: Saw pod success Jan 11 16:46:12.756: INFO: Pod "pod-subpath-test-configmap-hxmm" satisfied condition "Succeeded or Failed" Jan 11 16:46:12.763: INFO: Trying to get logs from node leguer-worker2 pod pod-subpath-test-configmap-hxmm container test-container-subpath-configmap-hxmm: STEP: delete the pod Jan 11 16:46:12.797: INFO: Waiting for pod pod-subpath-test-configmap-hxmm to disappear Jan 11 16:46:12.802: INFO: Pod pod-subpath-test-configmap-hxmm no longer exists STEP: Deleting pod pod-subpath-test-configmap-hxmm Jan 11 16:46:12.802: INFO: Deleting pod "pod-subpath-test-configmap-hxmm" in namespace "subpath-2182" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:46:12.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-2182" for this suite. • [SLOW TEST:24.291 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":309,"completed":87,"skipped":1578,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:46:12.823: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-3316 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a new StatefulSet Jan 11 16:46:12.956: INFO: Found 0 stateful pods, waiting for 3 Jan 11 16:46:23.037: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 11 16:46:23.037: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 11 16:46:23.037: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Jan 11 16:46:32.966: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 11 16:46:32.966: INFO: Waiting for pod 
ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 11 16:46:32.966: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Jan 11 16:46:33.017: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Jan 11 16:46:43.086: INFO: Updating stateful set ss2 Jan 11 16:46:43.141: INFO: Waiting for Pod statefulset-3316/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted Jan 11 16:46:53.906: INFO: Found 2 stateful pods, waiting for 3 Jan 11 16:47:03.915: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 11 16:47:03.915: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 11 16:47:03.915: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Jan 11 16:47:03.952: INFO: Updating stateful set ss2 Jan 11 16:47:03.993: INFO: Waiting for Pod statefulset-3316/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 11 16:47:14.035: INFO: Updating stateful set ss2 Jan 11 16:47:14.079: INFO: Waiting for StatefulSet statefulset-3316/ss2 to complete update Jan 11 16:47:14.079: INFO: Waiting for Pod statefulset-3316/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 11 16:47:24.092: INFO: Waiting for StatefulSet statefulset-3316/ss2 to complete update Jan 11 16:47:24.093: INFO: Waiting for Pod statefulset-3316/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Jan 11 16:47:34.095: INFO: Deleting all statefulset in ns statefulset-3316 Jan 11 16:47:34.100: INFO: Scaling statefulset ss2 to 0 Jan 11 16:48:34.186: INFO: Waiting for statefulset status.replicas updated to 0 Jan 11 16:48:34.192: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:48:34.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3316" for this suite. 
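The canary and phased rollout above hinge on the RollingUpdate partition: only pods with an ordinal greater than or equal to the partition are moved to the new revision, so lowering the partition step by step phases the update in. A hedged client-go sketch of that mechanic follows; the StatefulSet name, namespace, and image are placeholders.

// Sketch of a partitioned StatefulSet rolling update: with partition = 2 on a
// 3-replica set, only the highest ordinal picks up the new template (the
// canary); lowering the partition later rolls the remaining pods forward.
package main

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()
	sets := cs.AppsV1().StatefulSets("default")

	ss, err := sets.Get(ctx, "ss2", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	partition := int32(2)
	ss.Spec.UpdateStrategy = appsv1.StatefulSetUpdateStrategy{
		Type: appsv1.RollingUpdateStatefulSetStrategyType,
		RollingUpdate: &appsv1.RollingUpdateStatefulSetStrategy{
			Partition: &partition,
		},
	}
	ss.Spec.Template.Spec.Containers[0].Image = "httpd:2.4.39-alpine"
	if _, err := sets.Update(ctx, ss, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}

	// Phased rollout: once the canary pod is healthy, lower the partition
	// (2 -> 1 -> 0); each step lets one more ordinal move to the new revision.
}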
• [SLOW TEST:141.419 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":309,"completed":88,"skipped":1608,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:48:34.243: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 11 16:48:34.319: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Jan 11 16:48:56.927: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8395 --namespace=crd-publish-openapi-8395 create -f -' Jan 11 16:49:03.401: INFO: stderr: "" Jan 11 16:49:03.401: INFO: stdout: "e2e-test-crd-publish-openapi-945-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Jan 11 16:49:03.401: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8395 --namespace=crd-publish-openapi-8395 delete e2e-test-crd-publish-openapi-945-crds test-cr' Jan 11 16:49:04.642: INFO: stderr: "" Jan 11 16:49:04.643: INFO: stdout: "e2e-test-crd-publish-openapi-945-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Jan 11 16:49:04.643: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8395 --namespace=crd-publish-openapi-8395 apply -f -' Jan 11 16:49:07.518: INFO: stderr: "" Jan 11 16:49:07.518: INFO: stdout: "e2e-test-crd-publish-openapi-945-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Jan 11 16:49:07.519: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8395 --namespace=crd-publish-openapi-8395 delete e2e-test-crd-publish-openapi-945-crds test-cr' Jan 11 16:49:08.757: INFO: stderr: "" Jan 11 16:49:08.757: INFO: stdout: 
"e2e-test-crd-publish-openapi-945-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Jan 11 16:49:08.758: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8395 explain e2e-test-crd-publish-openapi-945-crds' Jan 11 16:49:11.866: INFO: stderr: "" Jan 11 16:49:11.867: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-945-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:49:34.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8395" for this suite. • [SLOW TEST:60.195 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":309,"completed":89,"skipped":1609,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:49:34.439: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:129 [It] should run and stop simple daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Jan 11 16:49:34.579: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 16:49:34.593: INFO: Number of nodes with available pods: 0 Jan 11 16:49:34.593: INFO: Node leguer-worker is running more than one daemon pod Jan 11 16:49:35.604: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 16:49:35.646: INFO: Number of nodes with available pods: 0 Jan 11 16:49:35.646: INFO: Node leguer-worker is running more than one daemon pod Jan 11 16:49:36.605: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 16:49:36.611: INFO: Number of nodes with available pods: 0 Jan 11 16:49:36.611: INFO: Node leguer-worker is running more than one daemon pod Jan 11 16:49:37.619: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 16:49:37.646: INFO: Number of nodes with available pods: 0 Jan 11 16:49:37.646: INFO: Node leguer-worker is running more than one daemon pod Jan 11 16:49:38.606: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 16:49:38.612: INFO: Number of nodes with available pods: 1 Jan 11 16:49:38.613: INFO: Node leguer-worker is running more than one daemon pod Jan 11 16:49:39.618: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 16:49:39.624: INFO: Number of nodes with available pods: 2 Jan 11 16:49:39.624: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
Jan 11 16:49:39.703: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 16:49:39.710: INFO: Number of nodes with available pods: 1 Jan 11 16:49:39.710: INFO: Node leguer-worker2 is running more than one daemon pod Jan 11 16:49:40.725: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 16:49:40.735: INFO: Number of nodes with available pods: 1 Jan 11 16:49:40.735: INFO: Node leguer-worker2 is running more than one daemon pod Jan 11 16:49:41.722: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 16:49:41.729: INFO: Number of nodes with available pods: 1 Jan 11 16:49:41.729: INFO: Node leguer-worker2 is running more than one daemon pod Jan 11 16:49:42.724: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 16:49:42.731: INFO: Number of nodes with available pods: 1 Jan 11 16:49:42.731: INFO: Node leguer-worker2 is running more than one daemon pod Jan 11 16:49:43.721: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 16:49:43.753: INFO: Number of nodes with available pods: 1 Jan 11 16:49:43.753: INFO: Node leguer-worker2 is running more than one daemon pod Jan 11 16:49:44.727: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 16:49:44.734: INFO: Number of nodes with available pods: 1 Jan 11 16:49:44.734: INFO: Node leguer-worker2 is running more than one daemon pod Jan 11 16:49:45.732: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 16:49:45.738: INFO: Number of nodes with available pods: 1 Jan 11 16:49:45.738: INFO: Node leguer-worker2 is running more than one daemon pod Jan 11 16:49:46.724: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 16:49:46.732: INFO: Number of nodes with available pods: 1 Jan 11 16:49:46.732: INFO: Node leguer-worker2 is running more than one daemon pod Jan 11 16:49:47.727: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 16:49:47.733: INFO: Number of nodes with available pods: 1 Jan 11 16:49:47.734: INFO: Node leguer-worker2 is running more than one daemon pod Jan 11 16:49:48.723: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 16:49:48.730: INFO: Number of nodes with available pods: 1 Jan 11 16:49:48.730: INFO: Node leguer-worker2 is running more than one daemon pod Jan 11 16:49:49.722: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints 
[{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 16:49:49.748: INFO: Number of nodes with available pods: 1 Jan 11 16:49:49.748: INFO: Node leguer-worker2 is running more than one daemon pod Jan 11 16:49:50.725: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 16:49:50.731: INFO: Number of nodes with available pods: 1 Jan 11 16:49:50.731: INFO: Node leguer-worker2 is running more than one daemon pod Jan 11 16:49:51.722: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 16:49:51.728: INFO: Number of nodes with available pods: 1 Jan 11 16:49:51.728: INFO: Node leguer-worker2 is running more than one daemon pod Jan 11 16:49:52.724: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 16:49:52.732: INFO: Number of nodes with available pods: 1 Jan 11 16:49:52.732: INFO: Node leguer-worker2 is running more than one daemon pod Jan 11 16:49:53.721: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 16:49:53.728: INFO: Number of nodes with available pods: 2 Jan 11 16:49:53.728: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:95 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4723, will wait for the garbage collector to delete the pods Jan 11 16:49:53.797: INFO: Deleting DaemonSet.extensions daemon-set took: 8.417518ms Jan 11 16:49:54.398: INFO: Terminating DaemonSet.extensions daemon-set pods took: 600.776228ms Jan 11 16:50:00.206: INFO: Number of nodes with available pods: 0 Jan 11 16:50:00.207: INFO: Number of running nodes: 0, number of available pods: 0 Jan 11 16:50:00.238: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"196285"},"items":null} Jan 11 16:50:00.243: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"196285"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:50:00.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-4723" for this suite. 
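What this DaemonSet case exercises, in sketch form: a one-pod-per-node DaemonSet that skips the control-plane node because its node-role.kubernetes.io/master:NoSchedule taint is not tolerated, and whose pods the controller recreates when one is deleted (the "revived" check above). The names and image are placeholders; the commented toleration would extend scheduling to the tainted node as well.

// Sketch of a simple DaemonSet, created with client-go.
package main

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	labels := map[string]string{"daemonset-name": "daemon-set"}
	ds := &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "httpd:2.4.38-alpine",
					}},
					// To also cover the control-plane node, tolerate its taint:
					// Tolerations: []corev1.Toleration{{
					// 	Key:      "node-role.kubernetes.io/master",
					// 	Operator: corev1.TolerationOpExists,
					// 	Effect:   corev1.TaintEffectNoSchedule,
					// }},
				},
			},
		},
	}

	if _, err := cs.AppsV1().DaemonSets("default").Create(context.TODO(), ds, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	// Deleting any one daemon pod afterwards just makes the controller
	// recreate it on the same node, which is the "revived" check in the log.
}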
• [SLOW TEST:25.847 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":309,"completed":90,"skipped":1622,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:50:00.291: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jan 11 16:50:04.510: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:50:04.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-3809" for this suite. 
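A sketch of the pod this termination-message case creates: the container writes "DONE" to its log and exits non-zero without writing a file at terminationMessagePath, so with TerminationMessagePolicy FallbackToLogsOnError the kubelet copies the tail of the log into the terminated state's message. Names and image are placeholders, not the suite's own.

// Sketch: termination message taken from the container log on error.
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "termination-message-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:                     "termination-message-container",
				Image:                    "busybox:1.29",
				Command:                  []string{"sh", "-c", "echo DONE; exit 1"},
				TerminationMessagePath:   "/dev/termination-log",
				TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
			}},
		},
	}

	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	// Once the pod reaches Failed, Status.ContainerStatuses[0].State.Terminated.Message
	// should contain the logged "DONE".
}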
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":309,"completed":91,"skipped":1638,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:50:04.585: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [BeforeEach] Kubectl replace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1554 [It] should update a single-container pod's image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: running the image docker.io/library/httpd:2.4.38-alpine Jan 11 16:50:04.670: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-3144 run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod' Jan 11 16:50:05.900: INFO: stderr: "" Jan 11 16:50:05.900: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created Jan 11 16:50:10.953: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-3144 get pod e2e-test-httpd-pod -o json' Jan 11 16:50:12.119: INFO: stderr: "" Jan 11 16:50:12.119: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2021-01-11T16:50:05Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"managedFields\": [\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:labels\": {\n \".\": {},\n \"f:run\": {}\n }\n },\n \"f:spec\": {\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n \".\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": {},\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {}\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:terminationGracePeriodSeconds\": {}\n }\n },\n \"manager\": \"kubectl-run\",\n \"operation\": \"Update\",\n \"time\": \"2021-01-11T16:50:05Z\"\n },\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:status\": {\n \"f:conditions\": {\n \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n 
\"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n }\n },\n \"f:containerStatuses\": {},\n \"f:hostIP\": {},\n \"f:phase\": {},\n \"f:podIP\": {},\n \"f:podIPs\": {\n \".\": {},\n \"k:{\\\"ip\\\":\\\"10.244.1.6\\\"}\": {\n \".\": {},\n \"f:ip\": {}\n }\n },\n \"f:startTime\": {}\n }\n },\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"time\": \"2021-01-11T16:50:08Z\"\n }\n ],\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-3144\",\n \"resourceVersion\": \"196355\",\n \"uid\": \"3d5e631d-29a1-4fa4-a46a-6c222e85040e\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-9sjpd\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"leguer-worker2\",\n \"preemptionPolicy\": \"PreemptLowerPriority\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-9sjpd\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-9sjpd\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-01-11T16:50:05Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-01-11T16:50:08Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-01-11T16:50:08Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-01-11T16:50:05Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://f25904c18ded73c77a301a7396b393dd2d5283f3774e9dc9ee526e27058ba817\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2021-01-11T16:50:08Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.18.0.12\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.6\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.1.6\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2021-01-11T16:50:05Z\"\n }\n}\n" STEP: replace the image in the pod Jan 11 16:50:12.124: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-3144 replace -f -' Jan 11 16:50:14.757: INFO: stderr: "" Jan 11 
16:50:14.757: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1558 Jan 11 16:50:14.777: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-3144 delete pods e2e-test-httpd-pod' Jan 11 16:50:29.837: INFO: stderr: "" Jan 11 16:50:29.837: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:50:29.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3144" for this suite. • [SLOW TEST:25.287 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1551 should update a single-container pod's image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":309,"completed":92,"skipped":1641,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:50:29.873: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:85 [It] should run the lifecycle of a Deployment [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating a Deployment STEP: waiting for Deployment to be created STEP: waiting for all Replicas to be Ready Jan 11 16:50:30.004: INFO: observed Deployment test-deployment in namespace deployment-7088 with ReadyReplicas 0 and labels map[test-deployment-static:true] Jan 11 16:50:30.004: INFO: observed Deployment test-deployment in namespace deployment-7088 with ReadyReplicas 0 and labels map[test-deployment-static:true] Jan 11 16:50:30.034: INFO: observed Deployment test-deployment in namespace deployment-7088 with ReadyReplicas 0 and labels map[test-deployment-static:true] Jan 11 16:50:30.035: INFO: observed Deployment test-deployment in namespace deployment-7088 with ReadyReplicas 0 and labels map[test-deployment-static:true] Jan 11 16:50:30.071: INFO: observed Deployment test-deployment in namespace deployment-7088 with ReadyReplicas 0 and labels map[test-deployment-static:true] Jan 11 16:50:30.071: INFO: observed Deployment test-deployment in namespace deployment-7088 with ReadyReplicas 0 and labels map[test-deployment-static:true] Jan 11 16:50:30.214: INFO: 
observed Deployment test-deployment in namespace deployment-7088 with ReadyReplicas 0 and labels map[test-deployment-static:true] Jan 11 16:50:30.214: INFO: observed Deployment test-deployment in namespace deployment-7088 with ReadyReplicas 0 and labels map[test-deployment-static:true] Jan 11 16:50:33.555: INFO: observed Deployment test-deployment in namespace deployment-7088 with ReadyReplicas 1 and labels map[test-deployment-static:true] Jan 11 16:50:33.555: INFO: observed Deployment test-deployment in namespace deployment-7088 with ReadyReplicas 1 and labels map[test-deployment-static:true] Jan 11 16:50:34.879: INFO: observed Deployment test-deployment in namespace deployment-7088 with ReadyReplicas 2 and labels map[test-deployment-static:true] STEP: patching the Deployment Jan 11 16:50:34.897: INFO: observed event type ADDED STEP: waiting for Replicas to scale Jan 11 16:50:34.902: INFO: observed Deployment test-deployment in namespace deployment-7088 with ReadyReplicas 0 Jan 11 16:50:34.902: INFO: observed Deployment test-deployment in namespace deployment-7088 with ReadyReplicas 0 Jan 11 16:50:34.903: INFO: observed Deployment test-deployment in namespace deployment-7088 with ReadyReplicas 0 Jan 11 16:50:34.903: INFO: observed Deployment test-deployment in namespace deployment-7088 with ReadyReplicas 0 Jan 11 16:50:34.903: INFO: observed Deployment test-deployment in namespace deployment-7088 with ReadyReplicas 0 Jan 11 16:50:34.903: INFO: observed Deployment test-deployment in namespace deployment-7088 with ReadyReplicas 0 Jan 11 16:50:34.903: INFO: observed Deployment test-deployment in namespace deployment-7088 with ReadyReplicas 0 Jan 11 16:50:34.903: INFO: observed Deployment test-deployment in namespace deployment-7088 with ReadyReplicas 0 Jan 11 16:50:34.904: INFO: observed Deployment test-deployment in namespace deployment-7088 with ReadyReplicas 1 Jan 11 16:50:34.904: INFO: observed Deployment test-deployment in namespace deployment-7088 with ReadyReplicas 1 Jan 11 16:50:34.904: INFO: observed Deployment test-deployment in namespace deployment-7088 with ReadyReplicas 2 Jan 11 16:50:34.904: INFO: observed Deployment test-deployment in namespace deployment-7088 with ReadyReplicas 2 Jan 11 16:50:34.905: INFO: observed Deployment test-deployment in namespace deployment-7088 with ReadyReplicas 2 Jan 11 16:50:34.905: INFO: observed Deployment test-deployment in namespace deployment-7088 with ReadyReplicas 2 Jan 11 16:50:34.950: INFO: observed Deployment test-deployment in namespace deployment-7088 with ReadyReplicas 2 Jan 11 16:50:34.950: INFO: observed Deployment test-deployment in namespace deployment-7088 with ReadyReplicas 2 Jan 11 16:50:35.017: INFO: observed Deployment test-deployment in namespace deployment-7088 with ReadyReplicas 2 Jan 11 16:50:35.017: INFO: observed Deployment test-deployment in namespace deployment-7088 with ReadyReplicas 2 Jan 11 16:50:35.098: INFO: observed Deployment test-deployment in namespace deployment-7088 with ReadyReplicas 2 Jan 11 16:50:35.098: INFO: observed Deployment test-deployment in namespace deployment-7088 with ReadyReplicas 2 Jan 11 16:50:35.187: INFO: observed Deployment test-deployment in namespace deployment-7088 with ReadyReplicas 1 STEP: listing Deployments Jan 11 16:50:35.197: INFO: Found test-deployment with labels: map[test-deployment:patched test-deployment-static:true] STEP: updating the Deployment Jan 11 16:50:35.443: INFO: observed Deployment test-deployment in namespace deployment-7088 with ReadyReplicas 1 STEP: fetching 
the DeploymentStatus Jan 11 16:50:35.551: INFO: observed Deployment test-deployment in namespace deployment-7088 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Jan 11 16:50:35.621: INFO: observed Deployment test-deployment in namespace deployment-7088 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Jan 11 16:50:35.716: INFO: observed Deployment test-deployment in namespace deployment-7088 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Jan 11 16:50:35.800: INFO: observed Deployment test-deployment in namespace deployment-7088 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Jan 11 16:50:35.879: INFO: observed Deployment test-deployment in namespace deployment-7088 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Jan 11 16:50:35.909: INFO: observed Deployment test-deployment in namespace deployment-7088 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Jan 11 16:50:36.307: INFO: observed Deployment test-deployment in namespace deployment-7088 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Jan 11 16:50:36.696: INFO: observed Deployment test-deployment in namespace deployment-7088 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] STEP: patching the DeploymentStatus STEP: fetching the DeploymentStatus Jan 11 16:50:41.498: INFO: observed Deployment test-deployment in namespace deployment-7088 with ReadyReplicas 1 Jan 11 16:50:41.499: INFO: observed Deployment test-deployment in namespace deployment-7088 with ReadyReplicas 1 Jan 11 16:50:41.499: INFO: observed Deployment test-deployment in namespace deployment-7088 with ReadyReplicas 1 Jan 11 16:50:41.499: INFO: observed Deployment test-deployment in namespace deployment-7088 with ReadyReplicas 1 Jan 11 16:50:41.499: INFO: observed Deployment test-deployment in namespace deployment-7088 with ReadyReplicas 1 Jan 11 16:50:41.500: INFO: observed Deployment test-deployment in namespace deployment-7088 with ReadyReplicas 1 Jan 11 16:50:41.500: INFO: observed Deployment test-deployment in namespace deployment-7088 with ReadyReplicas 1 Jan 11 16:50:41.500: INFO: observed Deployment test-deployment in namespace deployment-7088 with ReadyReplicas 1 STEP: deleting the Deployment Jan 11 16:50:42.066: INFO: observed event type MODIFIED Jan 11 16:50:42.067: INFO: observed event type MODIFIED Jan 11 16:50:42.067: INFO: observed event type MODIFIED Jan 11 16:50:42.067: INFO: observed event type MODIFIED Jan 11 16:50:42.068: INFO: observed event type MODIFIED Jan 11 16:50:42.069: INFO: observed event type MODIFIED Jan 11 16:50:42.069: INFO: observed event type MODIFIED Jan 11 16:50:42.070: INFO: observed event type MODIFIED Jan 11 16:50:42.070: INFO: observed event type MODIFIED Jan 11 16:50:42.071: INFO: observed event type MODIFIED Jan 11 16:50:42.072: INFO: observed event type MODIFIED Jan 11 16:50:42.072: INFO: observed event type MODIFIED Jan 11 16:50:42.072: INFO: observed event type MODIFIED [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:79 Jan 11 16:50:42.141: INFO: Log out all the ReplicaSets if there is no deployment created Jan 11 16:50:42.175: INFO: ReplicaSet "test-deployment-768947d6f5": &ReplicaSet{ObjectMeta:{test-deployment-768947d6f5 
deployment-7088 de3872f5-8d6a-43d4-ad71-18dd28982645 196572 3 2021-01-11 16:50:35 +0000 UTC map[pod-template-hash:768947d6f5 test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:2 deployment.kubernetes.io/max-replicas:3 deployment.kubernetes.io/revision:3] [{apps/v1 Deployment test-deployment 4a1d50a2-b669-4bd8-84dd-8786e14ca8ab 0x915d5c7 0x915d5c8}] [] [{kube-controller-manager Update apps/v1 2021-01-11 16:50:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4a1d50a2-b669-4bd8-84dd-8786e14ca8ab\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*2,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 768947d6f5,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[pod-template-hash:768947d6f5 test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0x915d630 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:2,FullyLabeledReplicas:2,ObservedGeneration:3,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jan 11 16:50:42.188: INFO: pod: "test-deployment-768947d6f5-5cbtx": &Pod{ObjectMeta:{test-deployment-768947d6f5-5cbtx test-deployment-768947d6f5- deployment-7088 76617c78-2946-4806-bd02-150fa856e369 196578 0 2021-01-11 16:50:41 +0000 UTC map[pod-template-hash:768947d6f5 test-deployment-static:true] map[] [{apps/v1 ReplicaSet test-deployment-768947d6f5 de3872f5-8d6a-43d4-ad71-18dd28982645 0x866b7c7 0x866b7c8}] [] [{kube-controller-manager Update v1 2021-01-11 16:50:41 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"de3872f5-8d6a-43d4-ad71-18dd28982645\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-11 16:50:41 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rnwgw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rnwgw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rnwgw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},Runtime
ClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 16:50:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 16:50:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [test-deployment],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 16:50:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [test-deployment],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 16:50:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2021-01-11 16:50:41 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 11 16:50:42.190: INFO: pod: "test-deployment-768947d6f5-6ztmf": &Pod{ObjectMeta:{test-deployment-768947d6f5-6ztmf test-deployment-768947d6f5- deployment-7088 9db6e39b-451c-4c7c-b437-5d33365d0059 196554 0 2021-01-11 16:50:35 +0000 UTC map[pod-template-hash:768947d6f5 test-deployment-static:true] map[] [{apps/v1 ReplicaSet test-deployment-768947d6f5 de3872f5-8d6a-43d4-ad71-18dd28982645 0x866b957 0x866b958}] [] [{kube-controller-manager Update v1 2021-01-11 16:50:35 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"de3872f5-8d6a-43d4-ad71-18dd28982645\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-11 16:50:40 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.246\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rnwgw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rnwgw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rnwgw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 16:50:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 16:50:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2021-01-11 16:50:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 16:50:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.2.246,StartTime:2021-01-11 16:50:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-01-11 16:50:40 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://9406f2e083cba3d99caf3226f0ffe7dd6340b37bcb0d1e2aee634cdd4ea2943f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.246,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 11 16:50:42.192: INFO: ReplicaSet "test-deployment-7c65d4bcf9": &ReplicaSet{ObjectMeta:{test-deployment-7c65d4bcf9 deployment-7088 8f7034bb-d731-44f5-8d16-e6f5df30f66e 196576 4 2021-01-11 16:50:34 +0000 UTC map[pod-template-hash:7c65d4bcf9 test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:2 deployment.kubernetes.io/max-replicas:3 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-deployment 4a1d50a2-b669-4bd8-84dd-8786e14ca8ab 0x915d697 0x915d698}] [] [{kube-controller-manager Update apps/v1 2021-01-11 16:50:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4a1d50a2-b669-4bd8-84dd-8786e14ca8ab\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 7c65d4bcf9,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[pod-template-hash:7c65d4bcf9 test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment k8s.gcr.io/pause:3.2 [/bin/sleep 100000] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0x915d718 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] 
}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:4,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 11 16:50:42.197: INFO: ReplicaSet "test-deployment-8b6954bfb": &ReplicaSet{ObjectMeta:{test-deployment-8b6954bfb deployment-7088 fb348cf9-a921-46e0-8834-eae1df9af3a2 196489 2 2021-01-11 16:50:29 +0000 UTC map[pod-template-hash:8b6954bfb test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:2 deployment.kubernetes.io/max-replicas:3 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-deployment 4a1d50a2-b669-4bd8-84dd-8786e14ca8ab 0x915d777 0x915d778}] [] [{kube-controller-manager Update apps/v1 2021-01-11 16:50:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4a1d50a2-b669-4bd8-84dd-8786e14ca8ab\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 8b6954bfb,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[pod-template-hash:8b6954bfb test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment k8s.gcr.io/e2e-test-images/agnhost:2.21 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0x915d7e0 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jan 11 16:50:42.205: INFO: pod: "test-deployment-8b6954bfb-c6bsf": &Pod{ObjectMeta:{test-deployment-8b6954bfb-c6bsf test-deployment-8b6954bfb- deployment-7088 eebeeca5-93b2-4f80-84de-95b0824522b9 196450 0 2021-01-11 16:50:30 +0000 UTC map[pod-template-hash:8b6954bfb test-deployment-static:true] map[] [{apps/v1 ReplicaSet test-deployment-8b6954bfb fb348cf9-a921-46e0-8834-eae1df9af3a2 0x915db87 0x915db88}] [] [{kube-controller-manager Update v1 2021-01-11 16:50:30 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fb348cf9-a921-46e0-8834-eae1df9af3a2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-11 16:50:33 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.7\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rnwgw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rnwgw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:k8s.gcr.io/e2e-test-images/agnhost:2.21,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rnwgw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,Readin
essGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 16:50:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 16:50:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 16:50:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 16:50:30 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:10.244.1.7,StartTime:2021-01-11 16:50:30 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-01-11 16:50:33 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.21,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:ab055cd3d45f50b90732c14593a5bf50f210871bb4f91994c756fc22db6d922a,ContainerID:containerd://a341d44941d77f67d82c1b99f49c4b0e4622501706184652dc12f8b92568030d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.7,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:50:42.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7088" for this suite. 
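
Reference note (not part of the test log): the Deployment lifecycle exercised above (create, patch, list by label, update, delete in namespace deployment-7088) maps onto plain client-go calls. This is a minimal sketch under assumed names taken from the log (test-deployment, the httpd image); kubeconfig handling and the patch payload are illustrative, not the e2e suite's exact code.

```go
package main

import (
	"context"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()

	labels := map[string]string{"test-deployment-static": "true"}
	deploy := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-deployment", Labels: labels},
		Spec: appsv1.DeploymentSpec{
			Replicas: int32Ptr(2),
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{Containers: []corev1.Container{{
					Name:  "test-deployment",
					Image: "docker.io/library/httpd:2.4.38-alpine",
				}}},
			},
		},
	}
	d := cs.AppsV1().Deployments("deployment-7088")

	// Create, then patch a label (the log's "patching the Deployment" step).
	if _, err := d.Create(ctx, deploy, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	patch := []byte(`{"metadata":{"labels":{"test-deployment":"patched"}}}`)
	if _, err := d.Patch(ctx, "test-deployment", types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
		panic(err)
	}

	// List by label, read status, then delete (the final lifecycle steps).
	list, err := d.List(ctx, metav1.ListOptions{LabelSelector: "test-deployment-static=true"})
	if err != nil {
		panic(err)
	}
	for _, item := range list.Items {
		fmt.Printf("%s ready replicas: %d\n", item.Name, item.Status.ReadyReplicas)
	}
	if err := d.Delete(ctx, "test-deployment", metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
}
```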
• [SLOW TEST:12.347 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run the lifecycle of a Deployment [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","total":309,"completed":93,"skipped":1653,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:50:42.223: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 Jan 11 16:50:42.340: INFO: Waiting up to 1m0s for all nodes to be ready Jan 11 16:51:42.434: INFO: Waiting for terminating namespaces to be deleted... [It] validates basic preemption works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Create pods that use 2/3 of node resources. Jan 11 16:51:42.523: INFO: Created pod: pod0-sched-preemption-low-priority Jan 11 16:51:42.603: INFO: Created pod: pod1-sched-preemption-medium-priority STEP: Wait for pods to be scheduled. STEP: Run a high priority pod that has same requirements as that of lower priority pod [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:52:14.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-7545" for this suite. 
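
Reference note (not part of the test log): the preemption case above fills 2/3 of node capacity with low- and medium-priority pods, then runs a higher-priority pod with the same requests so the scheduler evicts a lower-priority victim. A minimal sketch of the objects involved, assuming an illustrative PriorityClass name, value, image, and resource requests rather than the suite's own:

```go
package sketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	schedulingv1 "k8s.io/api/scheduling/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createPreemptionObjects creates a high PriorityClass and a pod that uses it
// with resource requests matching an already-running lower-priority pod, so
// the scheduler must preempt to place it. Values are illustrative.
func createPreemptionObjects(ctx context.Context, cs kubernetes.Interface, ns string) error {
	pc := &schedulingv1.PriorityClass{
		ObjectMeta: metav1.ObjectMeta{Name: "high-priority"},
		Value:      1000,
	}
	if _, err := cs.SchedulingV1().PriorityClasses().Create(ctx, pc, metav1.CreateOptions{}); err != nil {
		return err
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "preemptor-pod"},
		Spec: corev1.PodSpec{
			PriorityClassName: "high-priority",
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.2",
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						corev1.ResourceCPU:    resource.MustParse("500m"),
						corev1.ResourceMemory: resource.MustParse("200Mi"),
					},
				},
			}},
		},
	}
	_, err := cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{})
	return err
}
```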
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 • [SLOW TEST:92.579 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates basic preemption works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]","total":309,"completed":94,"skipped":1687,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:52:14.806: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should provide podname only [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward API volume plugin Jan 11 16:52:14.918: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fb92d989-06f8-49fd-95fd-44b25b8ae626" in namespace "projected-4302" to be "Succeeded or Failed" Jan 11 16:52:14.955: INFO: Pod "downwardapi-volume-fb92d989-06f8-49fd-95fd-44b25b8ae626": Phase="Pending", Reason="", readiness=false. Elapsed: 36.474373ms Jan 11 16:52:16.961: INFO: Pod "downwardapi-volume-fb92d989-06f8-49fd-95fd-44b25b8ae626": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042659412s Jan 11 16:52:18.970: INFO: Pod "downwardapi-volume-fb92d989-06f8-49fd-95fd-44b25b8ae626": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.052085636s STEP: Saw pod success Jan 11 16:52:18.970: INFO: Pod "downwardapi-volume-fb92d989-06f8-49fd-95fd-44b25b8ae626" satisfied condition "Succeeded or Failed" Jan 11 16:52:18.976: INFO: Trying to get logs from node leguer-worker pod downwardapi-volume-fb92d989-06f8-49fd-95fd-44b25b8ae626 container client-container: STEP: delete the pod Jan 11 16:52:19.013: INFO: Waiting for pod downwardapi-volume-fb92d989-06f8-49fd-95fd-44b25b8ae626 to disappear Jan 11 16:52:19.064: INFO: Pod downwardapi-volume-fb92d989-06f8-49fd-95fd-44b25b8ae626 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:52:19.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4302" for this suite. 
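
Reference note (not part of the test log): the projected downward API case above creates a pod whose own name is projected into a file, then asserts on the container's log output once the pod reaches "Succeeded". A minimal sketch of such a pod; the busybox image, mount path, and file name are assumptions, not the suite's exact values.

```go
package sketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createDownwardAPIPod creates a pod with a projected volume that exposes
// metadata.name as /etc/podinfo/podname; the container prints that file so a
// caller can verify the pod name appears in the logs.
func createDownwardAPIPod(ctx context.Context, cs kubernetes.Interface, ns string) error {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path:     "podname",
									FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
								}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "busybox:1.29",
				Command:      []string{"sh", "-c", "cat /etc/podinfo/podname"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
	_, err := cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{})
	return err
}
```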
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":309,"completed":95,"skipped":1706,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:52:19.083: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should print the output to logs [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [AfterEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:52:23.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-7454" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":309,"completed":96,"skipped":1729,"failed":0} SSSSS ------------------------------ [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:52:23.283: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename certificates STEP: Waiting for a default service account to be provisioned in namespace [It] should support CSR API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: getting /apis STEP: getting /apis/certificates.k8s.io STEP: getting /apis/certificates.k8s.io/v1 STEP: creating STEP: getting STEP: listing STEP: watching Jan 11 16:52:40.087: INFO: starting watch STEP: patching STEP: updating Jan 11 16:52:40.102: INFO: waiting for watch events with expected annotations Jan 11 16:52:40.103: INFO: saw patched and updated annotations STEP: getting /approval STEP: patching /approval STEP: updating /approval STEP: getting /status STEP: patching /status STEP: updating /status STEP: deleting STEP: deleting a collection [AfterEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:52:40.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "certificates-4896" for this suite. 
• [SLOW TEST:17.017 seconds] [sig-auth] Certificates API [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should support CSR API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","total":309,"completed":97,"skipped":1734,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:52:40.304: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 11 16:52:40.427: INFO: Waiting up to 5m0s for pod "busybox-user-65534-71c5c5f8-5306-4d05-b2b4-0b09ceb3e1c6" in namespace "security-context-test-4462" to be "Succeeded or Failed" Jan 11 16:52:40.438: INFO: Pod "busybox-user-65534-71c5c5f8-5306-4d05-b2b4-0b09ceb3e1c6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.804317ms Jan 11 16:52:42.472: INFO: Pod "busybox-user-65534-71c5c5f8-5306-4d05-b2b4-0b09ceb3e1c6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044939807s Jan 11 16:52:44.479: INFO: Pod "busybox-user-65534-71c5c5f8-5306-4d05-b2b4-0b09ceb3e1c6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.052340447s Jan 11 16:52:44.479: INFO: Pod "busybox-user-65534-71c5c5f8-5306-4d05-b2b4-0b09ceb3e1c6" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:52:44.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-4462" for this suite. 
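
Reference note (not part of the test log): the Security Context case above runs a container with runAsUser 65534 and waits for the pod to reach "Succeeded". A minimal sketch of the pod spec involved, assuming an illustrative busybox image and command:

```go
package sketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createNonRootPod runs a one-shot container as uid 65534; the container
// prints its effective uid, which is what a check like the one above asserts.
func createNonRootPod(ctx context.Context, cs kubernetes.Interface, ns string) error {
	uid := int64(65534)
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-user-65534"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:            "busybox",
				Image:           "busybox:1.29",
				Command:         []string{"sh", "-c", "id -u"},
				SecurityContext: &corev1.SecurityContext{RunAsUser: &uid},
			}},
		},
	}
	_, err := cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{})
	return err
}
```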
•{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":98,"skipped":1758,"failed":0} SSSSSSSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:52:44.495: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:53:02.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-2066" for this suite. • [SLOW TEST:18.218 seconds] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":309,"completed":99,"skipped":1766,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:53:02.718: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:53:19.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-467" for this suite. • [SLOW TEST:16.539 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":309,"completed":100,"skipped":1818,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:53:19.258: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating projection with secret that has name secret-emptykey-test-ca7ba0ff-91fc-44ec-bc75-be7ff7693121 [AfterEach] [sig-api-machinery] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:53:19.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7570" for this suite. 
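
Reference note (not part of the test log): the Secrets case above is a negative test; a Secret whose data map contains an empty key fails API validation, so the Create call never succeeds. A minimal sketch, with an assumed secret name and value:

```go
package sketch

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createSecretWithEmptyKey attempts to create a Secret whose data map uses an
// empty key; the API server rejects it, so the error branch is the expected path.
func createSecretWithEmptyKey(ctx context.Context, cs kubernetes.Interface, ns string) {
	secret := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-emptykey-test"},
		Data:       map[string][]byte{"": []byte("value-1")},
	}
	if _, err := cs.CoreV1().Secrets(ns).Create(ctx, secret, metav1.CreateOptions{}); err != nil {
		fmt.Println("expected validation error:", err)
	}
}
```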
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":309,"completed":101,"skipped":1824,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:53:19.372: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:53:50.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-5983" for this suite. STEP: Destroying namespace "nsdeletetest-9626" for this suite. Jan 11 16:53:50.684: INFO: Namespace nsdeletetest-9626 was already deleted STEP: Destroying namespace "nsdeletetest-1470" for this suite. 
• [SLOW TEST:31.316 seconds] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":309,"completed":102,"skipped":1836,"failed":0} SSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:53:50.690: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating the pod Jan 11 16:53:55.356: INFO: Successfully updated pod "annotationupdate49141340-4b87-4bf1-8928-1ff2307afa53" [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:53:57.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3694" for this suite. 
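
Reference note (not part of the test log): the Downward API volume case above exposes the pod's annotations as a file and then updates the pod ("Successfully updated pod ...") so the kubelet rewrites the file. A minimal sketch of the volume and the annotation patch; the volume name, file path, and annotation key/value are assumptions:

```go
package sketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// annotationsDownwardVolume builds a downward API volume that projects the
// pod's annotations into a file; the kubelet refreshes the file when the
// annotations change.
func annotationsDownwardVolume() corev1.Volume {
	return corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path:     "annotations",
					FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.annotations"},
				}},
			},
		},
	}
}

// patchPodAnnotation changes an annotation on a running pod, triggering the
// projected file to be rewritten.
func patchPodAnnotation(ctx context.Context, cs kubernetes.Interface, ns, pod string) error {
	patch := []byte(`{"metadata":{"annotations":{"builder":"updated-value"}}}`)
	_, err := cs.CoreV1().Pods(ns).Patch(ctx, pod, types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	return err
}
```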
• [SLOW TEST:6.719 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36 should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":309,"completed":103,"skipped":1843,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:53:57.411: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating pod pod-subpath-test-downwardapi-qq9k STEP: Creating a pod to test atomic-volume-subpath Jan 11 16:53:57.557: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-qq9k" in namespace "subpath-5250" to be "Succeeded or Failed" Jan 11 16:53:57.567: INFO: Pod "pod-subpath-test-downwardapi-qq9k": Phase="Pending", Reason="", readiness=false. Elapsed: 9.39087ms Jan 11 16:53:59.575: INFO: Pod "pod-subpath-test-downwardapi-qq9k": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017857691s Jan 11 16:54:01.584: INFO: Pod "pod-subpath-test-downwardapi-qq9k": Phase="Running", Reason="", readiness=true. Elapsed: 4.026569601s Jan 11 16:54:03.593: INFO: Pod "pod-subpath-test-downwardapi-qq9k": Phase="Running", Reason="", readiness=true. Elapsed: 6.035341073s Jan 11 16:54:05.602: INFO: Pod "pod-subpath-test-downwardapi-qq9k": Phase="Running", Reason="", readiness=true. Elapsed: 8.044374601s Jan 11 16:54:07.610: INFO: Pod "pod-subpath-test-downwardapi-qq9k": Phase="Running", Reason="", readiness=true. Elapsed: 10.052674457s Jan 11 16:54:09.617: INFO: Pod "pod-subpath-test-downwardapi-qq9k": Phase="Running", Reason="", readiness=true. Elapsed: 12.060202677s Jan 11 16:54:11.626: INFO: Pod "pod-subpath-test-downwardapi-qq9k": Phase="Running", Reason="", readiness=true. Elapsed: 14.068821024s Jan 11 16:54:13.635: INFO: Pod "pod-subpath-test-downwardapi-qq9k": Phase="Running", Reason="", readiness=true. Elapsed: 16.077860061s Jan 11 16:54:15.643: INFO: Pod "pod-subpath-test-downwardapi-qq9k": Phase="Running", Reason="", readiness=true. Elapsed: 18.085458654s Jan 11 16:54:17.650: INFO: Pod "pod-subpath-test-downwardapi-qq9k": Phase="Running", Reason="", readiness=true. Elapsed: 20.093084769s Jan 11 16:54:19.659: INFO: Pod "pod-subpath-test-downwardapi-qq9k": Phase="Running", Reason="", readiness=true. Elapsed: 22.101289028s Jan 11 16:54:21.667: INFO: Pod "pod-subpath-test-downwardapi-qq9k": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.109346231s STEP: Saw pod success Jan 11 16:54:21.667: INFO: Pod "pod-subpath-test-downwardapi-qq9k" satisfied condition "Succeeded or Failed" Jan 11 16:54:21.672: INFO: Trying to get logs from node leguer-worker pod pod-subpath-test-downwardapi-qq9k container test-container-subpath-downwardapi-qq9k: STEP: delete the pod Jan 11 16:54:21.766: INFO: Waiting for pod pod-subpath-test-downwardapi-qq9k to disappear Jan 11 16:54:21.808: INFO: Pod pod-subpath-test-downwardapi-qq9k no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-qq9k Jan 11 16:54:21.808: INFO: Deleting pod "pod-subpath-test-downwardapi-qq9k" in namespace "subpath-5250" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:54:21.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-5250" for this suite. • [SLOW TEST:24.418 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":309,"completed":104,"skipped":1857,"failed":0} SSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:54:21.831: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating a service externalname-service with the type=ExternalName in namespace services-4507 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-4507 I0111 16:54:21.975409 10 runners.go:190] Created replication controller with name: externalname-service, namespace: services-4507, replica count: 2 I0111 16:54:25.026988 10 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0111 16:54:28.028369 10 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 11 16:54:28.028: INFO: Creating new exec pod Jan 11 16:54:33.063: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config 
--namespace=services-4507 exec execpodktcbj -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Jan 11 16:54:34.544: INFO: stderr: "I0111 16:54:34.399621 687 log.go:181] (0x2841a40) (0x275a000) Create stream\nI0111 16:54:34.405146 687 log.go:181] (0x2841a40) (0x275a000) Stream added, broadcasting: 1\nI0111 16:54:34.429194 687 log.go:181] (0x2841a40) Reply frame received for 1\nI0111 16:54:34.429783 687 log.go:181] (0x2841a40) (0x2e1c1c0) Create stream\nI0111 16:54:34.429889 687 log.go:181] (0x2841a40) (0x2e1c1c0) Stream added, broadcasting: 3\nI0111 16:54:34.431072 687 log.go:181] (0x2841a40) Reply frame received for 3\nI0111 16:54:34.431324 687 log.go:181] (0x2841a40) (0x275a150) Create stream\nI0111 16:54:34.431419 687 log.go:181] (0x2841a40) (0x275a150) Stream added, broadcasting: 5\nI0111 16:54:34.432553 687 log.go:181] (0x2841a40) Reply frame received for 5\nI0111 16:54:34.528527 687 log.go:181] (0x2841a40) Data frame received for 3\nI0111 16:54:34.529019 687 log.go:181] (0x2841a40) Data frame received for 5\nI0111 16:54:34.529200 687 log.go:181] (0x275a150) (5) Data frame handling\nI0111 16:54:34.529342 687 log.go:181] (0x2e1c1c0) (3) Data frame handling\nI0111 16:54:34.529767 687 log.go:181] (0x2841a40) Data frame received for 1\nI0111 16:54:34.529876 687 log.go:181] (0x275a000) (1) Data frame handling\nI0111 16:54:34.530030 687 log.go:181] (0x275a150) (5) Data frame sent\nI0111 16:54:34.530116 687 log.go:181] (0x275a000) (1) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0111 16:54:34.531069 687 log.go:181] (0x2841a40) Data frame received for 5\nI0111 16:54:34.531145 687 log.go:181] (0x275a150) (5) Data frame handling\nI0111 16:54:34.531902 687 log.go:181] (0x2841a40) (0x275a000) Stream removed, broadcasting: 1\nI0111 16:54:34.533809 687 log.go:181] (0x2841a40) Go away received\nI0111 16:54:34.536356 687 log.go:181] (0x2841a40) (0x275a000) Stream removed, broadcasting: 1\nI0111 16:54:34.536601 687 log.go:181] (0x2841a40) (0x2e1c1c0) Stream removed, broadcasting: 3\nI0111 16:54:34.536771 687 log.go:181] (0x2841a40) (0x275a150) Stream removed, broadcasting: 5\n" Jan 11 16:54:34.544: INFO: stdout: "" Jan 11 16:54:34.548: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-4507 exec execpodktcbj -- /bin/sh -x -c nc -zv -t -w 2 10.96.133.37 80' Jan 11 16:54:36.050: INFO: stderr: "I0111 16:54:35.907330 707 log.go:181] (0x28375e0) (0x2837730) Create stream\nI0111 16:54:35.909746 707 log.go:181] (0x28375e0) (0x2837730) Stream added, broadcasting: 1\nI0111 16:54:35.926876 707 log.go:181] (0x28375e0) Reply frame received for 1\nI0111 16:54:35.927384 707 log.go:181] (0x28375e0) (0x2836460) Create stream\nI0111 16:54:35.927452 707 log.go:181] (0x28375e0) (0x2836460) Stream added, broadcasting: 3\nI0111 16:54:35.928979 707 log.go:181] (0x28375e0) Reply frame received for 3\nI0111 16:54:35.929250 707 log.go:181] (0x28375e0) (0x275b1f0) Create stream\nI0111 16:54:35.929318 707 log.go:181] (0x28375e0) (0x275b1f0) Stream added, broadcasting: 5\nI0111 16:54:35.930306 707 log.go:181] (0x28375e0) Reply frame received for 5\nI0111 16:54:36.032598 707 log.go:181] (0x28375e0) Data frame received for 5\nI0111 16:54:36.033204 707 log.go:181] (0x28375e0) Data frame received for 1\nI0111 16:54:36.033450 707 log.go:181] (0x2837730) (1) Data frame handling\nI0111 16:54:36.033845 707 log.go:181] (0x28375e0) Data frame received for 3\nI0111 16:54:36.033945 
707 log.go:181] (0x2836460) (3) Data frame handling\nI0111 16:54:36.034755 707 log.go:181] (0x275b1f0) (5) Data frame handling\nI0111 16:54:36.035856 707 log.go:181] (0x275b1f0) (5) Data frame sent\nI0111 16:54:36.036496 707 log.go:181] (0x2837730) (1) Data frame sent\nI0111 16:54:36.038244 707 log.go:181] (0x28375e0) (0x2837730) Stream removed, broadcasting: 1\n+ nc -zv -t -w 2 10.96.133.37 80\nConnection to 10.96.133.37 80 port [tcp/http] succeeded!\nI0111 16:54:36.039003 707 log.go:181] (0x28375e0) Data frame received for 5\nI0111 16:54:36.039218 707 log.go:181] (0x275b1f0) (5) Data frame handling\nI0111 16:54:36.039448 707 log.go:181] (0x28375e0) Go away received\nI0111 16:54:36.042147 707 log.go:181] (0x28375e0) (0x2837730) Stream removed, broadcasting: 1\nI0111 16:54:36.042406 707 log.go:181] (0x28375e0) (0x2836460) Stream removed, broadcasting: 3\nI0111 16:54:36.042598 707 log.go:181] (0x28375e0) (0x275b1f0) Stream removed, broadcasting: 5\n" Jan 11 16:54:36.050: INFO: stdout: "" Jan 11 16:54:36.050: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:54:36.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4507" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 • [SLOW TEST:14.410 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":309,"completed":105,"skipped":1866,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:54:36.246: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:54:36.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5270" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. 
[Conformance]","total":309,"completed":106,"skipped":1911,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:54:36.412: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test emptydir 0666 on node default medium Jan 11 16:54:36.551: INFO: Waiting up to 5m0s for pod "pod-6263b100-2fb3-4abd-ba0f-9bdae5d6ad3f" in namespace "emptydir-2670" to be "Succeeded or Failed" Jan 11 16:54:36.561: INFO: Pod "pod-6263b100-2fb3-4abd-ba0f-9bdae5d6ad3f": Phase="Pending", Reason="", readiness=false. Elapsed: 9.367808ms Jan 11 16:54:38.574: INFO: Pod "pod-6263b100-2fb3-4abd-ba0f-9bdae5d6ad3f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023270607s Jan 11 16:54:40.585: INFO: Pod "pod-6263b100-2fb3-4abd-ba0f-9bdae5d6ad3f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033443284s STEP: Saw pod success Jan 11 16:54:40.585: INFO: Pod "pod-6263b100-2fb3-4abd-ba0f-9bdae5d6ad3f" satisfied condition "Succeeded or Failed" Jan 11 16:54:40.683: INFO: Trying to get logs from node leguer-worker2 pod pod-6263b100-2fb3-4abd-ba0f-9bdae5d6ad3f container test-container: STEP: delete the pod Jan 11 16:54:40.753: INFO: Waiting for pod pod-6263b100-2fb3-4abd-ba0f-9bdae5d6ad3f to disappear Jan 11 16:54:40.757: INFO: Pod pod-6263b100-2fb3-4abd-ba0f-9bdae5d6ad3f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:54:40.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2670" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":107,"skipped":1929,"failed":0} SS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:54:40.777: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should provide container's memory limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward API volume plugin Jan 11 16:54:40.878: INFO: Waiting up to 5m0s for pod "downwardapi-volume-50fc1246-c12b-4989-adf8-5389b8e06ce4" in namespace "projected-986" to be "Succeeded or Failed" Jan 11 16:54:40.896: INFO: Pod "downwardapi-volume-50fc1246-c12b-4989-adf8-5389b8e06ce4": Phase="Pending", Reason="", readiness=false. Elapsed: 18.310973ms Jan 11 16:54:42.905: INFO: Pod "downwardapi-volume-50fc1246-c12b-4989-adf8-5389b8e06ce4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02748158s Jan 11 16:54:44.941: INFO: Pod "downwardapi-volume-50fc1246-c12b-4989-adf8-5389b8e06ce4": Phase="Running", Reason="", readiness=true. Elapsed: 4.062856817s Jan 11 16:54:46.949: INFO: Pod "downwardapi-volume-50fc1246-c12b-4989-adf8-5389b8e06ce4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.070911135s STEP: Saw pod success Jan 11 16:54:46.949: INFO: Pod "downwardapi-volume-50fc1246-c12b-4989-adf8-5389b8e06ce4" satisfied condition "Succeeded or Failed" Jan 11 16:54:46.955: INFO: Trying to get logs from node leguer-worker2 pod downwardapi-volume-50fc1246-c12b-4989-adf8-5389b8e06ce4 container client-container: STEP: delete the pod Jan 11 16:54:47.007: INFO: Waiting for pod downwardapi-volume-50fc1246-c12b-4989-adf8-5389b8e06ce4 to disappear Jan 11 16:54:47.011: INFO: Pod downwardapi-volume-50fc1246-c12b-4989-adf8-5389b8e06ce4 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:54:47.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-986" for this suite. 
• [SLOW TEST:6.247 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35 should provide container's memory limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":309,"completed":108,"skipped":1931,"failed":0} SSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:54:47.025: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward API volume plugin Jan 11 16:54:47.164: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2eb8c08a-7ced-4452-9119-41677b92386e" in namespace "downward-api-8871" to be "Succeeded or Failed" Jan 11 16:54:47.173: INFO: Pod "downwardapi-volume-2eb8c08a-7ced-4452-9119-41677b92386e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.660984ms Jan 11 16:54:49.179: INFO: Pod "downwardapi-volume-2eb8c08a-7ced-4452-9119-41677b92386e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015429697s Jan 11 16:54:51.188: INFO: Pod "downwardapi-volume-2eb8c08a-7ced-4452-9119-41677b92386e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024178639s STEP: Saw pod success Jan 11 16:54:51.189: INFO: Pod "downwardapi-volume-2eb8c08a-7ced-4452-9119-41677b92386e" satisfied condition "Succeeded or Failed" Jan 11 16:54:51.194: INFO: Trying to get logs from node leguer-worker2 pod downwardapi-volume-2eb8c08a-7ced-4452-9119-41677b92386e container client-container: STEP: delete the pod Jan 11 16:54:51.231: INFO: Waiting for pod downwardapi-volume-2eb8c08a-7ced-4452-9119-41677b92386e to disappear Jan 11 16:54:51.237: INFO: Pod downwardapi-volume-2eb8c08a-7ced-4452-9119-41677b92386e no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:54:51.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8871" for this suite. 
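The "set mode on item file" case above is about the per-item Mode field of a downward-API volume file. A sketch of the relevant spec follows; the field reference, 0400 mode, and names are illustrative, and field names follow client-go v0.20.x.

```go
// Sketch: downward-API volume whose single item carries an explicit 0400 mode.
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func int32Ptr(i int32) *int32 { return &i }

func createModeOnItemFilePod(ctx context.Context, cs kubernetes.Interface, ns string) error {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-mode-sketch"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "podname",
							FieldRef: &corev1.ObjectFieldSelector{
								APIVersion: "v1",
								FieldPath:  "metadata.name",
							},
							// Octal 0400: the per-item file mode the test asserts on.
							Mode: int32Ptr(0400),
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "busybox:1.29",
				Command:      []string{"sh", "-c", "ls -ln /etc/podinfo && cat /etc/podinfo/podname"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
	_, err := cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{})
	return err
}
```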
•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":109,"skipped":1934,"failed":0} S ------------------------------ [sig-network] Services should provide secure master service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:54:51.275: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should provide secure master service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:54:51.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6943" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":309,"completed":110,"skipped":1935,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:54:51.356: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:55:04.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5859" for this suite. 
• [SLOW TEST:13.421 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":309,"completed":111,"skipped":1946,"failed":0} SSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:55:04.778: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:55:21.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6178" for this suite. • [SLOW TEST:16.263 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance]","total":309,"completed":112,"skipped":1949,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:55:21.044: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test emptydir 0666 on tmpfs Jan 11 16:55:21.159: INFO: Waiting up to 5m0s for pod "pod-ec6bd8b2-0115-44cb-b57b-c4f32b6c0619" in namespace "emptydir-7774" to be "Succeeded or Failed" Jan 11 16:55:21.167: INFO: Pod "pod-ec6bd8b2-0115-44cb-b57b-c4f32b6c0619": Phase="Pending", Reason="", readiness=false. Elapsed: 8.191259ms Jan 11 16:55:23.175: INFO: Pod "pod-ec6bd8b2-0115-44cb-b57b-c4f32b6c0619": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01589236s Jan 11 16:55:25.204: INFO: Pod "pod-ec6bd8b2-0115-44cb-b57b-c4f32b6c0619": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.045582002s STEP: Saw pod success Jan 11 16:55:25.205: INFO: Pod "pod-ec6bd8b2-0115-44cb-b57b-c4f32b6c0619" satisfied condition "Succeeded or Failed" Jan 11 16:55:25.209: INFO: Trying to get logs from node leguer-worker pod pod-ec6bd8b2-0115-44cb-b57b-c4f32b6c0619 container test-container: STEP: delete the pod Jan 11 16:55:25.265: INFO: Waiting for pod pod-ec6bd8b2-0115-44cb-b57b-c4f32b6c0619 to disappear Jan 11 16:55:25.371: INFO: Pod pod-ec6bd8b2-0115-44cb-b57b-c4f32b6c0619 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:55:25.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7774" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":113,"skipped":1976,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:55:25.392: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating pod pod-subpath-test-projected-vgsw STEP: Creating a pod to test atomic-volume-subpath Jan 11 16:55:25.487: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-vgsw" in namespace "subpath-5645" to be "Succeeded or Failed" Jan 11 16:55:25.510: INFO: Pod "pod-subpath-test-projected-vgsw": Phase="Pending", Reason="", readiness=false. Elapsed: 22.560151ms Jan 11 16:55:27.527: INFO: Pod "pod-subpath-test-projected-vgsw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040034217s Jan 11 16:55:29.536: INFO: Pod "pod-subpath-test-projected-vgsw": Phase="Running", Reason="", readiness=true. Elapsed: 4.048290164s Jan 11 16:55:31.543: INFO: Pod "pod-subpath-test-projected-vgsw": Phase="Running", Reason="", readiness=true. Elapsed: 6.056012128s Jan 11 16:55:33.552: INFO: Pod "pod-subpath-test-projected-vgsw": Phase="Running", Reason="", readiness=true. Elapsed: 8.064268461s Jan 11 16:55:35.560: INFO: Pod "pod-subpath-test-projected-vgsw": Phase="Running", Reason="", readiness=true. Elapsed: 10.072635957s Jan 11 16:55:37.568: INFO: Pod "pod-subpath-test-projected-vgsw": Phase="Running", Reason="", readiness=true. Elapsed: 12.080712373s Jan 11 16:55:39.577: INFO: Pod "pod-subpath-test-projected-vgsw": Phase="Running", Reason="", readiness=true. Elapsed: 14.089245578s Jan 11 16:55:41.593: INFO: Pod "pod-subpath-test-projected-vgsw": Phase="Running", Reason="", readiness=true. Elapsed: 16.105731248s Jan 11 16:55:43.600: INFO: Pod "pod-subpath-test-projected-vgsw": Phase="Running", Reason="", readiness=true. Elapsed: 18.112880077s Jan 11 16:55:45.624: INFO: Pod "pod-subpath-test-projected-vgsw": Phase="Running", Reason="", readiness=true. Elapsed: 20.136243461s Jan 11 16:55:47.635: INFO: Pod "pod-subpath-test-projected-vgsw": Phase="Running", Reason="", readiness=true. Elapsed: 22.147667712s Jan 11 16:55:49.642: INFO: Pod "pod-subpath-test-projected-vgsw": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.154962456s STEP: Saw pod success Jan 11 16:55:49.643: INFO: Pod "pod-subpath-test-projected-vgsw" satisfied condition "Succeeded or Failed" Jan 11 16:55:49.647: INFO: Trying to get logs from node leguer-worker2 pod pod-subpath-test-projected-vgsw container test-container-subpath-projected-vgsw: STEP: delete the pod Jan 11 16:55:49.678: INFO: Waiting for pod pod-subpath-test-projected-vgsw to disappear Jan 11 16:55:49.706: INFO: Pod pod-subpath-test-projected-vgsw no longer exists STEP: Deleting pod pod-subpath-test-projected-vgsw Jan 11 16:55:49.706: INFO: Deleting pod "pod-subpath-test-projected-vgsw" in namespace "subpath-5645" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:55:49.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-5645" for this suite. • [SLOW TEST:24.530 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":309,"completed":114,"skipped":2005,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:55:49.925: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: getting the auto-created API token STEP: reading a file in the container Jan 11 16:55:54.607: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3808 pod-service-account-0d8d90a7-0839-4556-bd1d-fb975ce47984 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Jan 11 16:55:55.996: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3808 pod-service-account-0d8d90a7-0839-4556-bd1d-fb975ce47984 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Jan 11 16:55:57.468: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3808 pod-service-account-0d8d90a7-0839-4556-bd1d-fb975ce47984 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:55:58.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-3808" for this suite. 
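The service-account test above execs into a pod and reads the three projected files by hand. A sketch of an equivalent pod that prints them itself is below; the image is an assumption, while the mount path is the standard location the kubelet uses for the automounted token.

```go
// Sketch: a pod whose container prints the automounted service-account files.
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func createTokenReaderPod(ctx context.Context, cs kubernetes.Interface, ns string) error {
	const saDir = "/var/run/secrets/kubernetes.io/serviceaccount"
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "sa-token-sketch"},
		Spec: corev1.PodSpec{
			RestartPolicy:      corev1.RestartPolicyNever,
			ServiceAccountName: "default",
			Containers: []corev1.Container{{
				Name:  "test",
				Image: "busybox:1.29",
				// token, ca.crt, and namespace are mounted automatically for the
				// pod's service account unless automountServiceAccountToken is false.
				Command: []string{"sh", "-c",
					"cat " + saDir + "/token " + saDir + "/ca.crt " + saDir + "/namespace"},
			}},
		},
	}
	_, err := cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{})
	return err
}
```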
• [SLOW TEST:8.988 seconds] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":309,"completed":115,"skipped":2043,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:55:58.914: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:56:59.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8732" for this suite. 
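The readiness-probe test above relies on the fact that readiness failures, unlike liveness failures, never restart a container: the pod keeps running but is never marked Ready. A sketch of a pod with an always-failing exec probe follows; it assumes client-go v0.20.x, where the probe handler field is still named Handler (later releases renamed it ProbeHandler), and the image and timings are illustrative.

```go
// Sketch: a long-running pod whose readiness probe always exits non-zero.
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func createNeverReadyPod(ctx context.Context, cs kubernetes.Interface, ns string) error {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "never-ready-sketch"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "busybox:1.29",
				Command: []string{"sh", "-c", "sleep 600"},
				ReadinessProbe: &corev1.Probe{
					Handler: corev1.Handler{
						Exec: &corev1.ExecAction{Command: []string{"/bin/false"}},
					},
					InitialDelaySeconds: 5,
					PeriodSeconds:       5,
				},
			}},
		},
	}
	_, err := cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{})
	return err
}
```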
• [SLOW TEST:60.126 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":309,"completed":116,"skipped":2051,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:56:59.042: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:57:05.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-7352" for this suite. STEP: Destroying namespace "nsdeletetest-7788" for this suite. Jan 11 16:57:05.402: INFO: Namespace nsdeletetest-7788 was already deleted STEP: Destroying namespace "nsdeletetest-1313" for this suite. 
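The namespace test above reduces to: create a namespace, put a Service in it, delete the namespace, and confirm no Service survives once a namespace of the same name exists again. A condensed sketch is below; names are illustrative, and real code would poll for the old namespace to finish terminating before recreating it.

```go
// Sketch: namespace deletion takes its Services with it.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func namespaceDeletionRemovesServices(ctx context.Context, cs kubernetes.Interface) error {
	ns := "nsdelete-sketch"
	if _, err := cs.CoreV1().Namespaces().Create(ctx,
		&corev1.Namespace{ObjectMeta: metav1.ObjectMeta{Name: ns}}, metav1.CreateOptions{}); err != nil {
		return err
	}
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "test-service"},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"app": "test"},
			Ports:    []corev1.ServicePort{{Port: 80}},
		},
	}
	if _, err := cs.CoreV1().Services(ns).Create(ctx, svc, metav1.CreateOptions{}); err != nil {
		return err
	}
	if err := cs.CoreV1().Namespaces().Delete(ctx, ns, metav1.DeleteOptions{}); err != nil {
		return err
	}
	// ...wait for the namespace to disappear, recreate it, then verify:
	svcs, err := cs.CoreV1().Services(ns).List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	if len(svcs.Items) != 0 {
		return fmt.Errorf("expected no services, found %d", len(svcs.Items))
	}
	return nil
}
```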
• [SLOW TEST:6.365 seconds] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":309,"completed":117,"skipped":2059,"failed":0} SS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:57:05.408: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:85 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 11 16:57:05.502: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Jan 11 16:57:05.547: INFO: Pod name sample-pod: Found 0 pods out of 1 Jan 11 16:57:10.553: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jan 11 16:57:10.554: INFO: Creating deployment "test-rolling-update-deployment" Jan 11 16:57:10.561: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Jan 11 16:57:10.570: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Jan 11 16:57:12.586: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Jan 11 16:57:12.592: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745981030, loc:(*time.Location)(0x5f133f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745981030, loc:(*time.Location)(0x5f133f0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745981030, loc:(*time.Location)(0x5f133f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745981030, loc:(*time.Location)(0x5f133f0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-6b6bf9df46\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 11 16:57:14.600: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:79 Jan 11 16:57:14.621: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-4607 459eb063-fb1f-4e0e-b9cc-505ec2a14646 198318 1 2021-01-11 16:57:10 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] [{e2e.test Update apps/v1 2021-01-11 16:57:10 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-01-11 16:57:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.21 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xbcd7f48 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2021-01-11 16:57:10 +0000 UTC,LastTransitionTime:2021-01-11 16:57:10 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-6b6bf9df46" has successfully progressed.,LastUpdateTime:2021-01-11 16:57:13 +0000 UTC,LastTransitionTime:2021-01-11 16:57:10 +0000 
UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Jan 11 16:57:14.629: INFO: New ReplicaSet "test-rolling-update-deployment-6b6bf9df46" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-6b6bf9df46 deployment-4607 8b5c8a23-2a60-40fe-9756-15af81d191bf 198307 1 2021-01-11 16:57:10 +0000 UTC map[name:sample-pod pod-template-hash:6b6bf9df46] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 459eb063-fb1f-4e0e-b9cc-505ec2a14646 0x8757167 0x8757168}] [] [{kube-controller-manager Update apps/v1 2021-01-11 16:57:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"459eb063-fb1f-4e0e-b9cc-505ec2a14646\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 6b6bf9df46,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:6b6bf9df46] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.21 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0x87572e8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jan 11 16:57:14.629: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Jan 11 16:57:14.630: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-4607 7e9352c7-4f8c-4bdf-b51e-5fdc4994d200 198317 2 2021-01-11 16:57:05 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 459eb063-fb1f-4e0e-b9cc-505ec2a14646 0x8756f07 0x8756f08}] [] [{e2e.test Update apps/v1 2021-01-11 16:57:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-01-11 16:57:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"459eb063-fb1f-4e0e-b9cc-505ec2a14646\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0x8756fa8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 11 16:57:14.638: INFO: Pod "test-rolling-update-deployment-6b6bf9df46-b89hx" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-6b6bf9df46-b89hx test-rolling-update-deployment-6b6bf9df46- deployment-4607 f9426e45-3fc6-4a9a-b9d6-1dbda88f6cd9 198306 0 2021-01-11 16:57:10 +0000 UTC map[name:sample-pod pod-template-hash:6b6bf9df46] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-6b6bf9df46 8b5c8a23-2a60-40fe-9756-15af81d191bf 0x8f22047 0x8f22048}] [] [{kube-controller-manager Update v1 2021-01-11 16:57:10 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8b5c8a23-2a60-40fe-9756-15af81d191bf\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-11 16:57:13 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.28\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tbgqq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tbgqq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.21,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tbgqq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 16:57:10 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 16:57:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 16:57:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 16:57:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:10.244.1.28,StartTime:2021-01-11 16:57:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-01-11 16:57:13 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.21,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:ab055cd3d45f50b90732c14593a5bf50f210871bb4f91994c756fc22db6d922a,ContainerID:containerd://2674983750bbe3692cffb55440a47b7f9ae4bbf37e4a35f5bb200ae3b6d1c069,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.28,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:57:14.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4607" for this suite. • [SLOW TEST:9.246 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":309,"completed":118,"skipped":2061,"failed":0} SSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:57:14.656: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating projection with secret that has name projected-secret-test-8239dc2f-062f-44c9-806b-b8826c5ccab0 STEP: Creating a pod to test consume secrets Jan 11 16:57:14.782: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-5ac6cd6e-4443-41cb-876d-e8312b1c0583" in namespace "projected-7338" to be "Succeeded or Failed" Jan 11 16:57:14.810: INFO: Pod "pod-projected-secrets-5ac6cd6e-4443-41cb-876d-e8312b1c0583": Phase="Pending", Reason="", 
readiness=false. Elapsed: 27.351888ms Jan 11 16:57:16.816: INFO: Pod "pod-projected-secrets-5ac6cd6e-4443-41cb-876d-e8312b1c0583": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033269103s Jan 11 16:57:18.824: INFO: Pod "pod-projected-secrets-5ac6cd6e-4443-41cb-876d-e8312b1c0583": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.041837885s STEP: Saw pod success Jan 11 16:57:18.825: INFO: Pod "pod-projected-secrets-5ac6cd6e-4443-41cb-876d-e8312b1c0583" satisfied condition "Succeeded or Failed" Jan 11 16:57:18.830: INFO: Trying to get logs from node leguer-worker pod pod-projected-secrets-5ac6cd6e-4443-41cb-876d-e8312b1c0583 container projected-secret-volume-test: STEP: delete the pod Jan 11 16:57:18.911: INFO: Waiting for pod pod-projected-secrets-5ac6cd6e-4443-41cb-876d-e8312b1c0583 to disappear Jan 11 16:57:18.916: INFO: Pod pod-projected-secrets-5ac6cd6e-4443-41cb-876d-e8312b1c0583 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:57:18.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7338" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":119,"skipped":2065,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:57:18.937: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating configMap with name configmap-test-upd-16e02012-2a74-49f1-837e-e4ff9c5e3ea7 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:57:25.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3150" for this suite. 
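The binary-data ConfigMap test above turns on the distinction between the Data field (UTF-8 text) and BinaryData (arbitrary bytes); both become files when the ConfigMap is mounted as a volume. A sketch with illustrative keys and bytes:

```go
// Sketch: a ConfigMap carrying both text and binary payloads.
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func createBinaryConfigMap(ctx context.Context, cs kubernetes.Interface, ns string) error {
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-binary-sketch"},
		Data:       map[string]string{"data": "value-1"},
		BinaryData: map[string][]byte{"dump": {0xde, 0xca, 0xfb, 0xad}},
	}
	if _, err := cs.CoreV1().ConfigMaps(ns).Create(ctx, cm, metav1.CreateOptions{}); err != nil {
		return err
	}
	// Mounting it: a volume source like this projects each key as a file.
	_ = corev1.VolumeSource{
		ConfigMap: &corev1.ConfigMapVolumeSource{
			LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-binary-sketch"},
		},
	}
	return nil
}
```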
• [SLOW TEST:6.456 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":309,"completed":120,"skipped":2117,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:57:25.398: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Jan 11 16:57:32.934: INFO: 8 pods remaining Jan 11 16:57:32.934: INFO: 0 pods has nil DeletionTimestamp Jan 11 16:57:32.934: INFO: Jan 11 16:57:33.394: INFO: 0 pods remaining Jan 11 16:57:33.395: INFO: 0 pods has nil DeletionTimestamp Jan 11 16:57:33.395: INFO: STEP: Gathering metrics W0111 16:57:34.298755 10 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Jan 11 16:58:36.327: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:58:36.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1979" for this suite. 
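The garbage-collector case above hinges on the delete options: with foreground propagation the ReplicationController is kept (carrying a deletion timestamp) until the garbage collector has removed all of its pods, which is why the log shows "8 pods remaining" before the RC finally disappears. A sketch of that delete call, with orphan and background propagation noted for contrast:

```go
// Sketch: foreground cascading delete of a ReplicationController.
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func deleteRCForeground(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	// Foreground: the RC object stays until its dependents are gone.
	// Orphan: the RC goes away and its pods are left behind.
	// Background: the RC is deleted immediately and the GC removes pods asynchronously.
	policy := metav1.DeletePropagationForeground
	return cs.CoreV1().ReplicationControllers(ns).Delete(ctx, name,
		metav1.DeleteOptions{PropagationPolicy: &policy})
}
```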
• [SLOW TEST:70.946 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":309,"completed":121,"skipped":2156,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:58:36.346: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jan 11 16:58:40.716: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:58:40.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9955" for this suite. 
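For reference, a minimal sketch of the termination-message behaviour checked above, with illustrative names (termination-msg-demo, busybox): with TerminationMessagePolicy FallbackToLogsOnError, the container log is only copied into the termination message when the container fails, so a successful exit leaves the message empty.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: termination-msg-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "echo some log output; exit 0"]
    terminationMessagePolicy: FallbackToLogsOnError
EOF
# After the pod reaches Succeeded, the terminated state carries no message,
# matching the "Expected: &{} to match ..." assertion in the test:
kubectl get pod termination-msg-demo -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'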
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":309,"completed":122,"skipped":2178,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:58:40.815: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should provide container's cpu limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward API volume plugin Jan 11 16:58:40.912: INFO: Waiting up to 5m0s for pod "downwardapi-volume-330c067f-5031-45fc-a0a7-1aae938c9136" in namespace "projected-2223" to be "Succeeded or Failed" Jan 11 16:58:40.916: INFO: Pod "downwardapi-volume-330c067f-5031-45fc-a0a7-1aae938c9136": Phase="Pending", Reason="", readiness=false. Elapsed: 4.141305ms Jan 11 16:58:42.955: INFO: Pod "downwardapi-volume-330c067f-5031-45fc-a0a7-1aae938c9136": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042760361s Jan 11 16:58:44.963: INFO: Pod "downwardapi-volume-330c067f-5031-45fc-a0a7-1aae938c9136": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.051096109s STEP: Saw pod success Jan 11 16:58:44.963: INFO: Pod "downwardapi-volume-330c067f-5031-45fc-a0a7-1aae938c9136" satisfied condition "Succeeded or Failed" Jan 11 16:58:44.969: INFO: Trying to get logs from node leguer-worker pod downwardapi-volume-330c067f-5031-45fc-a0a7-1aae938c9136 container client-container: STEP: delete the pod Jan 11 16:58:45.027: INFO: Waiting for pod downwardapi-volume-330c067f-5031-45fc-a0a7-1aae938c9136 to disappear Jan 11 16:58:45.034: INFO: Pod downwardapi-volume-330c067f-5031-45fc-a0a7-1aae938c9136 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:58:45.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2223" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":309,"completed":123,"skipped":2199,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:58:45.049: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [BeforeEach] Kubectl logs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1392 STEP: creating an pod Jan 11 16:58:45.147: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-5171 run logs-generator --image=k8s.gcr.io/e2e-test-images/agnhost:2.21 --restart=Never -- logs-generator --log-lines-total 100 --run-duration 20s' Jan 11 16:58:46.436: INFO: stderr: "" Jan 11 16:58:46.436: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Waiting for log generator to start. Jan 11 16:58:46.437: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Jan 11 16:58:46.437: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-5171" to be "running and ready, or succeeded" Jan 11 16:58:46.443: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 5.724649ms Jan 11 16:58:48.454: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015988338s Jan 11 16:58:50.461: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.023082822s Jan 11 16:58:50.461: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Jan 11 16:58:50.461: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. 
Pods: [logs-generator] STEP: checking for a matching strings Jan 11 16:58:50.462: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-5171 logs logs-generator logs-generator' Jan 11 16:58:51.740: INFO: stderr: "" Jan 11 16:58:51.740: INFO: stdout: "I0111 16:58:49.072994 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/default/pods/xw2g 420\nI0111 16:58:49.273166 1 logs_generator.go:76] 1 POST /api/v1/namespaces/kube-system/pods/kgb4 227\nI0111 16:58:49.473207 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/default/pods/4qz 280\nI0111 16:58:49.673151 1 logs_generator.go:76] 3 GET /api/v1/namespaces/kube-system/pods/fjp 575\nI0111 16:58:49.873123 1 logs_generator.go:76] 4 POST /api/v1/namespaces/ns/pods/9bv2 318\nI0111 16:58:50.073132 1 logs_generator.go:76] 5 GET /api/v1/namespaces/ns/pods/hvb 417\nI0111 16:58:50.273247 1 logs_generator.go:76] 6 GET /api/v1/namespaces/default/pods/q28 449\nI0111 16:58:50.473115 1 logs_generator.go:76] 7 GET /api/v1/namespaces/kube-system/pods/52c 294\nI0111 16:58:50.673031 1 logs_generator.go:76] 8 GET /api/v1/namespaces/default/pods/2vl 223\nI0111 16:58:50.873120 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/default/pods/hkmg 580\nI0111 16:58:51.073140 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/ns/pods/qbk 318\nI0111 16:58:51.273073 1 logs_generator.go:76] 11 POST /api/v1/namespaces/ns/pods/572 305\nI0111 16:58:51.473140 1 logs_generator.go:76] 12 POST /api/v1/namespaces/default/pods/6ms4 537\nI0111 16:58:51.673190 1 logs_generator.go:76] 13 PUT /api/v1/namespaces/kube-system/pods/b5kz 329\n" STEP: limiting log lines Jan 11 16:58:51.741: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-5171 logs logs-generator logs-generator --tail=1' Jan 11 16:58:52.963: INFO: stderr: "" Jan 11 16:58:52.963: INFO: stdout: "I0111 16:58:52.873194 1 logs_generator.go:76] 19 POST /api/v1/namespaces/ns/pods/k6z 223\n" Jan 11 16:58:52.963: INFO: got output "I0111 16:58:52.873194 1 logs_generator.go:76] 19 POST /api/v1/namespaces/ns/pods/k6z 223\n" STEP: limiting log bytes Jan 11 16:58:52.965: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-5171 logs logs-generator logs-generator --limit-bytes=1' Jan 11 16:58:54.137: INFO: stderr: "" Jan 11 16:58:54.137: INFO: stdout: "I" Jan 11 16:58:54.137: INFO: got output "I" STEP: exposing timestamps Jan 11 16:58:54.138: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-5171 logs logs-generator logs-generator --tail=1 --timestamps' Jan 11 16:58:55.345: INFO: stderr: "" Jan 11 16:58:55.345: INFO: stdout: "2021-01-11T16:58:55.273388906Z I0111 16:58:55.273171 1 logs_generator.go:76] 31 PUT /api/v1/namespaces/default/pods/c2v 257\n" Jan 11 16:58:55.346: INFO: got output "2021-01-11T16:58:55.273388906Z I0111 16:58:55.273171 1 logs_generator.go:76] 31 PUT /api/v1/namespaces/default/pods/c2v 257\n" STEP: restricting to a time range Jan 11 16:58:57.849: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-5171 logs logs-generator logs-generator --since=1s' Jan 11 16:58:59.121: INFO: stderr: "" Jan 11 16:58:59.121: INFO: stdout: "I0111 16:58:58.273118 1 logs_generator.go:76] 46 GET /api/v1/namespaces/ns/pods/mk6 291\nI0111 16:58:58.473089 1 logs_generator.go:76] 47 
GET /api/v1/namespaces/default/pods/472 235\nI0111 16:58:58.673218 1 logs_generator.go:76] 48 PUT /api/v1/namespaces/kube-system/pods/ths 217\nI0111 16:58:58.873122 1 logs_generator.go:76] 49 GET /api/v1/namespaces/kube-system/pods/qtzf 507\nI0111 16:58:59.073117 1 logs_generator.go:76] 50 POST /api/v1/namespaces/ns/pods/qww 368\n" Jan 11 16:58:59.122: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-5171 logs logs-generator logs-generator --since=24h' Jan 11 16:59:00.617: INFO: stderr: "" Jan 11 16:59:00.617: INFO: stdout: "I0111 16:58:49.072994 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/default/pods/xw2g 420\nI0111 16:58:49.273166 1 logs_generator.go:76] 1 POST /api/v1/namespaces/kube-system/pods/kgb4 227\nI0111 16:58:49.473207 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/default/pods/4qz 280\nI0111 16:58:49.673151 1 logs_generator.go:76] 3 GET /api/v1/namespaces/kube-system/pods/fjp 575\nI0111 16:58:49.873123 1 logs_generator.go:76] 4 POST /api/v1/namespaces/ns/pods/9bv2 318\nI0111 16:58:50.073132 1 logs_generator.go:76] 5 GET /api/v1/namespaces/ns/pods/hvb 417\nI0111 16:58:50.273247 1 logs_generator.go:76] 6 GET /api/v1/namespaces/default/pods/q28 449\nI0111 16:58:50.473115 1 logs_generator.go:76] 7 GET /api/v1/namespaces/kube-system/pods/52c 294\nI0111 16:58:50.673031 1 logs_generator.go:76] 8 GET /api/v1/namespaces/default/pods/2vl 223\nI0111 16:58:50.873120 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/default/pods/hkmg 580\nI0111 16:58:51.073140 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/ns/pods/qbk 318\nI0111 16:58:51.273073 1 logs_generator.go:76] 11 POST /api/v1/namespaces/ns/pods/572 305\nI0111 16:58:51.473140 1 logs_generator.go:76] 12 POST /api/v1/namespaces/default/pods/6ms4 537\nI0111 16:58:51.673190 1 logs_generator.go:76] 13 PUT /api/v1/namespaces/kube-system/pods/b5kz 329\nI0111 16:58:51.873166 1 logs_generator.go:76] 14 POST /api/v1/namespaces/default/pods/zf5w 540\nI0111 16:58:52.073125 1 logs_generator.go:76] 15 PUT /api/v1/namespaces/default/pods/mf8 380\nI0111 16:58:52.273127 1 logs_generator.go:76] 16 PUT /api/v1/namespaces/kube-system/pods/z5k 445\nI0111 16:58:52.473130 1 logs_generator.go:76] 17 PUT /api/v1/namespaces/ns/pods/hgtf 335\nI0111 16:58:52.673135 1 logs_generator.go:76] 18 GET /api/v1/namespaces/ns/pods/6nj 536\nI0111 16:58:52.873194 1 logs_generator.go:76] 19 POST /api/v1/namespaces/ns/pods/k6z 223\nI0111 16:58:53.073124 1 logs_generator.go:76] 20 GET /api/v1/namespaces/ns/pods/5dld 333\nI0111 16:58:53.273088 1 logs_generator.go:76] 21 PUT /api/v1/namespaces/default/pods/2rmc 299\nI0111 16:58:53.473130 1 logs_generator.go:76] 22 PUT /api/v1/namespaces/kube-system/pods/2kj 347\nI0111 16:58:53.673116 1 logs_generator.go:76] 23 GET /api/v1/namespaces/kube-system/pods/7z9 556\nI0111 16:58:53.873104 1 logs_generator.go:76] 24 GET /api/v1/namespaces/default/pods/smlv 424\nI0111 16:58:54.073093 1 logs_generator.go:76] 25 PUT /api/v1/namespaces/kube-system/pods/nlqc 477\nI0111 16:58:54.273122 1 logs_generator.go:76] 26 GET /api/v1/namespaces/ns/pods/hh8k 488\nI0111 16:58:54.473117 1 logs_generator.go:76] 27 POST /api/v1/namespaces/ns/pods/q2l 260\nI0111 16:58:54.673117 1 logs_generator.go:76] 28 PUT /api/v1/namespaces/default/pods/7drc 415\nI0111 16:58:54.873147 1 logs_generator.go:76] 29 GET /api/v1/namespaces/ns/pods/ffz 516\nI0111 16:58:55.073139 1 logs_generator.go:76] 30 POST /api/v1/namespaces/default/pods/tx5 559\nI0111 16:58:55.273171 1 
logs_generator.go:76] 31 PUT /api/v1/namespaces/default/pods/c2v 257\nI0111 16:58:55.473108 1 logs_generator.go:76] 32 POST /api/v1/namespaces/default/pods/7jpx 347\nI0111 16:58:55.673211 1 logs_generator.go:76] 33 PUT /api/v1/namespaces/default/pods/tttd 536\nI0111 16:58:55.873147 1 logs_generator.go:76] 34 PUT /api/v1/namespaces/default/pods/m5vx 248\nI0111 16:58:56.073165 1 logs_generator.go:76] 35 POST /api/v1/namespaces/default/pods/49c 517\nI0111 16:58:56.273170 1 logs_generator.go:76] 36 GET /api/v1/namespaces/default/pods/xxwm 544\nI0111 16:58:56.473194 1 logs_generator.go:76] 37 POST /api/v1/namespaces/ns/pods/k457 585\nI0111 16:58:56.673186 1 logs_generator.go:76] 38 GET /api/v1/namespaces/default/pods/pls5 367\nI0111 16:58:56.873187 1 logs_generator.go:76] 39 GET /api/v1/namespaces/ns/pods/t9c 517\nI0111 16:58:57.073195 1 logs_generator.go:76] 40 PUT /api/v1/namespaces/default/pods/t7z9 544\nI0111 16:58:57.273150 1 logs_generator.go:76] 41 GET /api/v1/namespaces/default/pods/xgbt 367\nI0111 16:58:57.473152 1 logs_generator.go:76] 42 GET /api/v1/namespaces/ns/pods/7vw 332\nI0111 16:58:57.673144 1 logs_generator.go:76] 43 GET /api/v1/namespaces/default/pods/vmxv 281\nI0111 16:58:57.873184 1 logs_generator.go:76] 44 POST /api/v1/namespaces/default/pods/dpjz 447\nI0111 16:58:58.073128 1 logs_generator.go:76] 45 PUT /api/v1/namespaces/kube-system/pods/rxx 369\nI0111 16:58:58.273118 1 logs_generator.go:76] 46 GET /api/v1/namespaces/ns/pods/mk6 291\nI0111 16:58:58.473089 1 logs_generator.go:76] 47 GET /api/v1/namespaces/default/pods/472 235\nI0111 16:58:58.673218 1 logs_generator.go:76] 48 PUT /api/v1/namespaces/kube-system/pods/ths 217\nI0111 16:58:58.873122 1 logs_generator.go:76] 49 GET /api/v1/namespaces/kube-system/pods/qtzf 507\nI0111 16:58:59.073117 1 logs_generator.go:76] 50 POST /api/v1/namespaces/ns/pods/qww 368\nI0111 16:58:59.273138 1 logs_generator.go:76] 51 PUT /api/v1/namespaces/kube-system/pods/pjj 388\nI0111 16:58:59.473100 1 logs_generator.go:76] 52 GET /api/v1/namespaces/kube-system/pods/f5wq 430\nI0111 16:58:59.673108 1 logs_generator.go:76] 53 POST /api/v1/namespaces/kube-system/pods/66zs 286\nI0111 16:58:59.873135 1 logs_generator.go:76] 54 GET /api/v1/namespaces/ns/pods/w6w 547\nI0111 16:59:00.073135 1 logs_generator.go:76] 55 PUT /api/v1/namespaces/kube-system/pods/zqz5 204\nI0111 16:59:00.273135 1 logs_generator.go:76] 56 POST /api/v1/namespaces/default/pods/8mjs 579\nI0111 16:59:00.473157 1 logs_generator.go:76] 57 GET /api/v1/namespaces/default/pods/k98m 304\n" [AfterEach] Kubectl logs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1397 Jan 11 16:59:00.621: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-5171 delete pod logs-generator' Jan 11 16:59:29.831: INFO: stderr: "" Jan 11 16:59:29.831: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 16:59:29.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5171" for this suite. 
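For reference, the log-filtering flags exercised above can be reproduced directly against any running pod; the pod and container names below (logs-generator) mirror the test and are illustrative.

kubectl logs logs-generator logs-generator                          # full container log
kubectl logs logs-generator logs-generator --tail=1                 # last line only
kubectl logs logs-generator logs-generator --limit-bytes=1          # first byte only
kubectl logs logs-generator logs-generator --tail=1 --timestamps    # prefix each line with an RFC3339 timestamp
kubectl logs logs-generator logs-generator --since=1s               # only entries from the last second
kubectl logs logs-generator logs-generator --since=24h              # everything from the last 24 hours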
• [SLOW TEST:44.801 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1389 should be able to retrieve and filter logs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":309,"completed":124,"skipped":2217,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 16:59:29.852: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-501 [It] should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating statefulset ss in namespace statefulset-501 Jan 11 16:59:29.980: INFO: Found 0 stateful pods, waiting for 1 Jan 11 16:59:39.989: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Jan 11 16:59:40.087: INFO: Deleting all statefulset in ns statefulset-501 Jan 11 16:59:40.104: INFO: Scaling statefulset ss to 0 Jan 11 17:00:30.238: INFO: Waiting for statefulset status.replicas updated to 0 Jan 11 17:00:30.243: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 17:00:30.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-501" for this suite. 
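For reference, a sketch of the scale subresource this StatefulSet test drives, assuming the same illustrative names (StatefulSet ss in namespace statefulset-501): kubectl scale goes through the /scale subresource that the test reads and updates.

# Update spec.replicas through the scale subresource:
kubectl -n statefulset-501 scale statefulset ss --replicas=3
# Read the scale subresource itself straight from the API:
kubectl get --raw /apis/apps/v1/namespaces/statefulset-501/statefulsets/ss/scale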
• [SLOW TEST:60.428 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":309,"completed":125,"skipped":2231,"failed":0} SSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 17:00:30.282: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:92 Jan 11 17:00:30.393: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jan 11 17:00:30.412: INFO: Waiting for terminating namespaces to be deleted... Jan 11 17:00:30.417: INFO: Logging pods the apiserver thinks is on node leguer-worker before test Jan 11 17:00:30.431: INFO: rally-a8f48c6d-3kmika18-pdtzv from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 11 17:00:30.431: INFO: Container rally-a8f48c6d-3kmika18 ready: true, restart count 0 Jan 11 17:00:30.432: INFO: rally-a8f48c6d-3kmika18-pllzg from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 11 17:00:30.432: INFO: Container rally-a8f48c6d-3kmika18 ready: true, restart count 0 Jan 11 17:00:30.432: INFO: rally-a8f48c6d-4cyi45kq-j5tzz from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 11 17:00:30.432: INFO: Container rally-a8f48c6d-4cyi45kq ready: true, restart count 0 Jan 11 17:00:30.432: INFO: rally-a8f48c6d-f3hls6a3-57dwc from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:32 +0000 UTC (1 container statuses recorded) Jan 11 17:00:30.432: INFO: Container rally-a8f48c6d-f3hls6a3 ready: true, restart count 0 Jan 11 17:00:30.432: INFO: rally-a8f48c6d-1y3amfc0-lp8st from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:32 +0000 UTC (1 container statuses recorded) Jan 11 17:00:30.432: INFO: Container rally-a8f48c6d-1y3amfc0 ready: true, restart count 0 Jan 11 17:00:30.432: INFO: rally-a8f48c6d-9pqmjehi-9zwjj from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 11 17:00:30.432: INFO: Container rally-a8f48c6d-9pqmjehi ready: true, restart count 0 Jan 11 17:00:30.432: INFO: chaos-controller-manager-69c479c674-s796v from default started at 2021-01-10 20:58:24 +0000 UTC (1 container statuses recorded) Jan 11 17:00:30.432: INFO: Container chaos-mesh ready: 
true, restart count 0 Jan 11 17:00:30.432: INFO: chaos-daemon-lv692 from default started at 2021-01-10 20:58:25 +0000 UTC (1 container statuses recorded) Jan 11 17:00:30.432: INFO: Container chaos-daemon ready: true, restart count 0 Jan 11 17:00:30.432: INFO: kindnet-psm25 from kube-system started at 2021-01-10 17:38:10 +0000 UTC (1 container statuses recorded) Jan 11 17:00:30.432: INFO: Container kindnet-cni ready: true, restart count 0 Jan 11 17:00:30.432: INFO: kube-proxy-bmbcs from kube-system started at 2021-01-10 17:38:10 +0000 UTC (1 container statuses recorded) Jan 11 17:00:30.433: INFO: Container kube-proxy ready: true, restart count 0 Jan 11 17:00:30.433: INFO: Logging pods the apiserver thinks is on node leguer-worker2 before test Jan 11 17:00:30.447: INFO: rally-a8f48c6d-4cyi45kq-knr4r from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 11 17:00:30.447: INFO: Container rally-a8f48c6d-4cyi45kq ready: true, restart count 0 Jan 11 17:00:30.447: INFO: rally-a8f48c6d-f3hls6a3-dwt8n from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:32 +0000 UTC (1 container statuses recorded) Jan 11 17:00:30.447: INFO: Container rally-a8f48c6d-f3hls6a3 ready: true, restart count 0 Jan 11 17:00:30.447: INFO: rally-a8f48c6d-1y3amfc0-hh9qk from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:32 +0000 UTC (1 container statuses recorded) Jan 11 17:00:30.447: INFO: Container rally-a8f48c6d-1y3amfc0 ready: true, restart count 0 Jan 11 17:00:30.447: INFO: rally-a8f48c6d-9pqmjehi-85slb from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 11 17:00:30.447: INFO: Container rally-a8f48c6d-9pqmjehi ready: true, restart count 0 Jan 11 17:00:30.447: INFO: rally-a8f48c6d-vnukxqu0-llj24 from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 11 17:00:30.447: INFO: Container rally-a8f48c6d-vnukxqu0 ready: true, restart count 0 Jan 11 17:00:30.447: INFO: rally-a8f48c6d-vnukxqu0-v85kr from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 11 17:00:30.447: INFO: Container rally-a8f48c6d-vnukxqu0 ready: true, restart count 0 Jan 11 17:00:30.447: INFO: chaos-daemon-ffkg7 from default started at 2021-01-10 20:58:25 +0000 UTC (1 container statuses recorded) Jan 11 17:00:30.447: INFO: Container chaos-daemon ready: true, restart count 0 Jan 11 17:00:30.447: INFO: kindnet-8wggd from kube-system started at 2021-01-10 17:38:10 +0000 UTC (1 container statuses recorded) Jan 11 17:00:30.447: INFO: Container kindnet-cni ready: true, restart count 0 Jan 11 17:00:30.447: INFO: kube-proxy-29gxg from kube-system started at 2021-01-10 17:38:09 +0000 UTC (1 container statuses recorded) Jan 11 17:00:30.447: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.16593c1b7c2f926b], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 node(s) didn't match Pod's node affinity.] 
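For reference, a minimal sketch of the scheduling failure provoked above, with an illustrative pod name and label key (restricted-pod-demo, example.com/nonexistent-label): a nodeSelector that matches no node leaves the pod Pending and produces a FailedScheduling event like the one quoted in the log.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod-demo
spec:
  nodeSelector:
    example.com/nonexistent-label: "true"   # deliberately matches no node
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.2
EOF
# The pod stays Pending; the scheduler records a FailedScheduling event:
kubectl describe pod restricted-pod-demo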
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 17:00:31.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9312" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:83 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":309,"completed":126,"skipped":2236,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 17:00:31.539: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward API volume plugin Jan 11 17:00:31.657: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5ffb4159-c407-4658-a336-3d7adc02d9c5" in namespace "projected-3075" to be "Succeeded or Failed" Jan 11 17:00:31.673: INFO: Pod "downwardapi-volume-5ffb4159-c407-4658-a336-3d7adc02d9c5": Phase="Pending", Reason="", readiness=false. Elapsed: 15.775998ms Jan 11 17:00:33.682: INFO: Pod "downwardapi-volume-5ffb4159-c407-4658-a336-3d7adc02d9c5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024770037s Jan 11 17:00:35.698: INFO: Pod "downwardapi-volume-5ffb4159-c407-4658-a336-3d7adc02d9c5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.040517849s STEP: Saw pod success Jan 11 17:00:35.698: INFO: Pod "downwardapi-volume-5ffb4159-c407-4658-a336-3d7adc02d9c5" satisfied condition "Succeeded or Failed" Jan 11 17:00:35.824: INFO: Trying to get logs from node leguer-worker2 pod downwardapi-volume-5ffb4159-c407-4658-a336-3d7adc02d9c5 container client-container: STEP: delete the pod Jan 11 17:00:35.906: INFO: Waiting for pod downwardapi-volume-5ffb4159-c407-4658-a336-3d7adc02d9c5 to disappear Jan 11 17:00:35.917: INFO: Pod downwardapi-volume-5ffb4159-c407-4658-a336-3d7adc02d9c5 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 17:00:35.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3075" for this suite. 
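For reference, a minimal sketch of the DefaultMode behaviour this projected downwardAPI test checks, with illustrative names (defaultmode-demo, busybox): defaultMode on the projected volume controls the permission bits of every projected file.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: defaultmode-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "ls -lL /etc/podinfo"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      defaultMode: 0400          # every projected file gets mode -r--------
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
EOF
kubectl logs defaultmode-demo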
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":127,"skipped":2284,"failed":0} S ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 17:00:35.967: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:187 [It] should get a host IP [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating pod Jan 11 17:00:40.221: INFO: Pod pod-hostip-4918a53e-4a43-4309-ae4d-7ee79a9dad07 has hostIP: 172.18.0.12 [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 17:00:40.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3674" for this suite. •{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":309,"completed":128,"skipped":2285,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 17:00:40.239: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should check if kubectl diff finds a difference for Deployments [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: create deployment with httpd image Jan 11 17:00:40.351: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-7504 create -f -' Jan 11 17:00:45.361: INFO: stderr: "" Jan 11 17:00:45.361: INFO: stdout: "deployment.apps/httpd-deployment created\n" STEP: verify diff finds difference between live and declared image Jan 11 17:00:45.362: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-7504 diff -f -' Jan 11 17:00:49.387: INFO: rc: 1 Jan 11 17:00:49.389: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-7504 delete -f -' Jan 11 17:00:50.632: INFO: stderr: "" Jan 11 17:00:50.632: INFO: stdout: "deployment.apps \"httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 17:00:50.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7504" for this suite. • [SLOW TEST:10.408 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl diff /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:878 should check if kubectl diff finds a difference for Deployments [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":309,"completed":129,"skipped":2301,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 17:00:50.648: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 11 17:00:50.770: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties Jan 11 17:01:13.432: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-919 --namespace=crd-publish-openapi-919 create -f -' Jan 11 17:01:19.270: INFO: stderr: "" Jan 11 17:01:19.271: INFO: stdout: "e2e-test-crd-publish-openapi-7847-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Jan 11 17:01:19.271: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-919 --namespace=crd-publish-openapi-919 delete e2e-test-crd-publish-openapi-7847-crds test-foo' Jan 11 17:01:20.470: INFO: stderr: "" Jan 11 17:01:20.470: INFO: stdout: "e2e-test-crd-publish-openapi-7847-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" Jan 11 17:01:20.471: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-919 --namespace=crd-publish-openapi-919 apply -f -' Jan 11 17:01:23.162: INFO: stderr: "" Jan 11 17:01:23.162: INFO: stdout: "e2e-test-crd-publish-openapi-7847-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Jan 11 17:01:23.163: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-919 --namespace=crd-publish-openapi-919 delete e2e-test-crd-publish-openapi-7847-crds test-foo' Jan 11 17:01:24.416: INFO: stderr: "" Jan 11 17:01:24.417: INFO: stdout: 
"e2e-test-crd-publish-openapi-7847-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema Jan 11 17:01:24.417: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-919 --namespace=crd-publish-openapi-919 create -f -' Jan 11 17:01:26.765: INFO: rc: 1 Jan 11 17:01:26.766: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-919 --namespace=crd-publish-openapi-919 apply -f -' Jan 11 17:01:30.053: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties Jan 11 17:01:30.055: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-919 --namespace=crd-publish-openapi-919 create -f -' Jan 11 17:01:32.612: INFO: rc: 1 Jan 11 17:01:32.613: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-919 --namespace=crd-publish-openapi-919 apply -f -' Jan 11 17:01:35.264: INFO: rc: 1 STEP: kubectl explain works to explain CR properties Jan 11 17:01:35.268: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-919 explain e2e-test-crd-publish-openapi-7847-crds' Jan 11 17:01:38.552: INFO: stderr: "" Jan 11 17:01:38.553: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7847-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively Jan 11 17:01:38.557: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-919 explain e2e-test-crd-publish-openapi-7847-crds.metadata' Jan 11 17:01:41.466: INFO: stderr: "" Jan 11 17:01:41.467: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7847-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC.\n\n Populated by the system. Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested.\n\n Populated by the system when a graceful deletion is requested. Read-only.\n More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. 
If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server.\n\n If this field is specified and the generated name exists, the server will\n NOT return a 409 - instead, it will either return 201 Created or 500 with\n Reason ServerTimeout indicating a unique name could not be found in the\n time allotted, and the client should retry (optionally after the time\n indicated in the Retry-After header).\n\n Applied only if Name is not specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within which each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty.\n\n Must be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. 
There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources.\n\n Populated by the system. Read-only. Value must be treated as opaque by\n clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only.\n\n DEPRECATED Kubernetes will stop propagating this field in 1.20 release and\n the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations.\n\n Populated by the system. Read-only. More info:\n http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Jan 11 17:01:41.475: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-919 explain e2e-test-crd-publish-openapi-7847-crds.spec' Jan 11 17:01:43.864: INFO: stderr: "" Jan 11 17:01:43.864: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7847-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Jan 11 17:01:43.865: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-919 explain e2e-test-crd-publish-openapi-7847-crds.spec.bars' Jan 11 17:01:46.631: INFO: stderr: "" Jan 11 17:01:46.631: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7847-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist Jan 11 17:01:46.633: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-919 explain e2e-test-crd-publish-openapi-7847-crds.spec.bars2' Jan 11 17:01:49.488: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 17:02:11.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-919" for this suite. 
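For reference, the kubectl explain calls above work against any CRD that publishes a validation schema; the resource name below (foos in group example.com) is illustrative, not the generated one from this run.

kubectl explain foos                   # kind-level description from the published OpenAPI
kubectl explain foos.spec              # drill into a single field
kubectl explain foos.spec --recursive  # print the whole nested schema
# With the schema published, kubectl's client-side validation rejects unknown
# properties on create/apply, which is what the rc: 1 results above reflect:
kubectl create -f - <<'EOF'
apiVersion: example.com/v1
kind: Foo
metadata:
  name: test-foo
spec:
  unknownField: not-in-the-schema
EOF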
• [SLOW TEST:81.355 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":309,"completed":130,"skipped":2325,"failed":0} S ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 17:02:12.004: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating secret with name secret-test-9e749f0c-03bd-4352-b4dd-4831c1d67439 STEP: Creating a pod to test consume secrets Jan 11 17:02:12.103: INFO: Waiting up to 5m0s for pod "pod-secrets-045b285b-ce85-4a27-8c71-e09a57a7272c" in namespace "secrets-2758" to be "Succeeded or Failed" Jan 11 17:02:12.116: INFO: Pod "pod-secrets-045b285b-ce85-4a27-8c71-e09a57a7272c": Phase="Pending", Reason="", readiness=false. Elapsed: 12.413862ms Jan 11 17:02:14.124: INFO: Pod "pod-secrets-045b285b-ce85-4a27-8c71-e09a57a7272c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02065743s Jan 11 17:02:16.133: INFO: Pod "pod-secrets-045b285b-ce85-4a27-8c71-e09a57a7272c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029742099s STEP: Saw pod success Jan 11 17:02:16.133: INFO: Pod "pod-secrets-045b285b-ce85-4a27-8c71-e09a57a7272c" satisfied condition "Succeeded or Failed" Jan 11 17:02:16.137: INFO: Trying to get logs from node leguer-worker2 pod pod-secrets-045b285b-ce85-4a27-8c71-e09a57a7272c container secret-volume-test: STEP: delete the pod Jan 11 17:02:16.214: INFO: Waiting for pod pod-secrets-045b285b-ce85-4a27-8c71-e09a57a7272c to disappear Jan 11 17:02:16.220: INFO: Pod pod-secrets-045b285b-ce85-4a27-8c71-e09a57a7272c no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 17:02:16.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2758" for this suite. 
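For reference, a minimal sketch of the secret-volume consumption this test covers, with illustrative names (demo-secret, secret-volume-demo, busybox): a Secret mounted as a volume exposes each key as a read-only file.

kubectl create secret generic demo-secret --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: demo-secret
EOF
kubectl logs secret-volume-demo   # prints value-1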
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":309,"completed":131,"skipped":2326,"failed":0} SSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 17:02:16.234: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Jan 11 17:02:27.566: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Jan 11 17:02:29.597: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745981347, loc:(*time.Location)(0x5f133f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745981347, loc:(*time.Location)(0x5f133f0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745981347, loc:(*time.Location)(0x5f133f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745981347, loc:(*time.Location)(0x5f133f0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-7d6697c5b7\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 11 17:02:32.652: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 11 17:02:32.660: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 17:02:33.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-242" for this suite. 
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:17.680 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":309,"completed":132,"skipped":2331,"failed":0} SSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 17:02:33.916: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should create services for rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating Agnhost RC Jan 11 17:02:33.997: INFO: namespace kubectl-1327 Jan 11 17:02:33.997: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-1327 create -f -' Jan 11 17:02:36.034: INFO: stderr: "" Jan 11 17:02:36.034: INFO: stdout: "replicationcontroller/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. Jan 11 17:02:37.042: INFO: Selector matched 1 pods for map[app:agnhost] Jan 11 17:02:37.042: INFO: Found 0 / 1 Jan 11 17:02:38.041: INFO: Selector matched 1 pods for map[app:agnhost] Jan 11 17:02:38.042: INFO: Found 0 / 1 Jan 11 17:02:39.047: INFO: Selector matched 1 pods for map[app:agnhost] Jan 11 17:02:39.047: INFO: Found 1 / 1 Jan 11 17:02:39.047: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jan 11 17:02:39.052: INFO: Selector matched 1 pods for map[app:agnhost] Jan 11 17:02:39.052: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Jan 11 17:02:39.053: INFO: wait on agnhost-primary startup in kubectl-1327 Jan 11 17:02:39.053: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-1327 logs agnhost-primary-fttt6 agnhost-primary' Jan 11 17:02:40.232: INFO: stderr: "" Jan 11 17:02:40.232: INFO: stdout: "Paused\n" STEP: exposing RC Jan 11 17:02:40.232: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-1327 expose rc agnhost-primary --name=rm2 --port=1234 --target-port=6379' Jan 11 17:02:41.543: INFO: stderr: "" Jan 11 17:02:41.543: INFO: stdout: "service/rm2 exposed\n" Jan 11 17:02:41.550: INFO: Service rm2 in namespace kubectl-1327 found. 
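The expose step recorded above turns the replication controller's label selector into a Service. A condensed, standalone version of that command, assuming an agnhost-primary RC already exists in the current namespace, would be:

kubectl expose rc agnhost-primary --name=rm2 --port=1234 --target-port=6379
kubectl get service rm2 -o wide     # selector is copied from the RC
kubectl get endpoints rm2           # pod IP(s) paired on port 6379, as the test verifies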
STEP: exposing service Jan 11 17:02:43.566: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-1327 expose service rm2 --name=rm3 --port=2345 --target-port=6379' Jan 11 17:02:44.853: INFO: stderr: "" Jan 11 17:02:44.853: INFO: stdout: "service/rm3 exposed\n" Jan 11 17:02:44.862: INFO: Service rm3 in namespace kubectl-1327 found. [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 17:02:46.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1327" for this suite. • [SLOW TEST:12.977 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1229 should create services for rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":309,"completed":133,"skipped":2337,"failed":0} S ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 17:02:46.894: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating configMap with name configmap-test-upd-4d1c2ab9-ff13-4cdc-89bb-219d4253568c STEP: Creating the pod STEP: Updating configmap configmap-test-upd-4d1c2ab9-ff13-4cdc-89bb-219d4253568c STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 17:02:53.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-204" for this suite. 
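The ConfigMap test above relies on the kubelet re-projecting a mounted ConfigMap after it is updated. The same behaviour can be observed outside the harness with a sketch like the following; demo-cm, cm-watch and the busybox image are illustrative, and the refresh delay depends on the kubelet sync period and cache TTL (typically well under a minute), which is why the test simply waits to observe the update in the volume.

kubectl create configmap demo-cm --from-literal=data-1=value-1

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cm-watch
spec:
  containers:
  - name: watcher
    image: busybox:1.33
    command: ["sh", "-c", "while true; do cat /etc/cm/data-1; echo; sleep 5; done"]
    volumeMounts:
    - name: cm-vol
      mountPath: /etc/cm
  volumes:
  - name: cm-vol
    configMap:
      name: demo-cm
EOF

# Update the ConfigMap in place and watch the mounted file change:
kubectl create configmap demo-cm --from-literal=data-1=value-2 -o yaml --dry-run=client | kubectl apply -f -
kubectl logs -f cm-watch    # eventually switches from value-1 to value-2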
• [SLOW TEST:6.211 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":309,"completed":134,"skipped":2338,"failed":0} SSSSS ------------------------------ [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 17:02:53.107: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating service in namespace services-8430 STEP: creating service affinity-nodeport-transition in namespace services-8430 STEP: creating replication controller affinity-nodeport-transition in namespace services-8430 I0111 17:02:53.394820 10 runners.go:190] Created replication controller with name: affinity-nodeport-transition, namespace: services-8430, replica count: 3 I0111 17:02:56.446506 10 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0111 17:02:59.447372 10 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 11 17:02:59.469: INFO: Creating new exec pod Jan 11 17:03:04.529: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-8430 exec execpod-affinity55vm8 -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-transition 80' Jan 11 17:03:06.001: INFO: stderr: "I0111 17:03:05.881753 1356 log.go:181] (0x2898000) (0x2898070) Create stream\nI0111 17:03:05.883697 1356 log.go:181] (0x2898000) (0x2898070) Stream added, broadcasting: 1\nI0111 17:03:05.898231 1356 log.go:181] (0x2898000) Reply frame received for 1\nI0111 17:03:05.898904 1356 log.go:181] (0x2898000) (0x2898150) Create stream\nI0111 17:03:05.899019 1356 log.go:181] (0x2898000) (0x2898150) Stream added, broadcasting: 3\nI0111 17:03:05.902063 1356 log.go:181] (0x2898000) Reply frame received for 3\nI0111 17:03:05.902380 1356 log.go:181] (0x2898000) (0x2898310) Create stream\nI0111 17:03:05.902466 1356 log.go:181] (0x2898000) (0x2898310) Stream added, broadcasting: 5\nI0111 17:03:05.904192 1356 log.go:181] (0x2898000) Reply frame received for 5\nI0111 17:03:05.982900 1356 log.go:181] (0x2898000) Data frame received for 5\nI0111 17:03:05.983270 1356 log.go:181] (0x2898310) (5) Data frame handling\nI0111 17:03:05.983590 
1356 log.go:181] (0x2898000) Data frame received for 3\nI0111 17:03:05.983755 1356 log.go:181] (0x2898150) (3) Data frame handling\nI0111 17:03:05.984024 1356 log.go:181] (0x2898310) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-nodeport-transition 80\nI0111 17:03:05.984522 1356 log.go:181] (0x2898000) Data frame received for 1\nI0111 17:03:05.984953 1356 log.go:181] (0x2898070) (1) Data frame handling\nI0111 17:03:05.985122 1356 log.go:181] (0x2898000) Data frame received for 5\nI0111 17:03:05.985274 1356 log.go:181] (0x2898310) (5) Data frame handling\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\nI0111 17:03:05.985446 1356 log.go:181] (0x2898310) (5) Data frame sent\nI0111 17:03:05.985598 1356 log.go:181] (0x2898000) Data frame received for 5\nI0111 17:03:05.985769 1356 log.go:181] (0x2898070) (1) Data frame sent\nI0111 17:03:05.986023 1356 log.go:181] (0x2898310) (5) Data frame handling\nI0111 17:03:05.986830 1356 log.go:181] (0x2898000) (0x2898070) Stream removed, broadcasting: 1\nI0111 17:03:05.988797 1356 log.go:181] (0x2898000) Go away received\nI0111 17:03:05.992030 1356 log.go:181] (0x2898000) (0x2898070) Stream removed, broadcasting: 1\nI0111 17:03:05.992268 1356 log.go:181] (0x2898000) (0x2898150) Stream removed, broadcasting: 3\nI0111 17:03:05.992468 1356 log.go:181] (0x2898000) (0x2898310) Stream removed, broadcasting: 5\n" Jan 11 17:03:06.001: INFO: stdout: "" Jan 11 17:03:06.006: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-8430 exec execpod-affinity55vm8 -- /bin/sh -x -c nc -zv -t -w 2 10.96.2.48 80' Jan 11 17:03:07.444: INFO: stderr: "I0111 17:03:07.311605 1376 log.go:181] (0x2bbe000) (0x2bbe070) Create stream\nI0111 17:03:07.313378 1376 log.go:181] (0x2bbe000) (0x2bbe070) Stream added, broadcasting: 1\nI0111 17:03:07.330429 1376 log.go:181] (0x2bbe000) Reply frame received for 1\nI0111 17:03:07.330967 1376 log.go:181] (0x2bbe000) (0x2bbe150) Create stream\nI0111 17:03:07.331046 1376 log.go:181] (0x2bbe000) (0x2bbe150) Stream added, broadcasting: 3\nI0111 17:03:07.332247 1376 log.go:181] (0x2bbe000) Reply frame received for 3\nI0111 17:03:07.332489 1376 log.go:181] (0x2bbe000) (0x25c21c0) Create stream\nI0111 17:03:07.332561 1376 log.go:181] (0x2bbe000) (0x25c21c0) Stream added, broadcasting: 5\nI0111 17:03:07.333925 1376 log.go:181] (0x2bbe000) Reply frame received for 5\nI0111 17:03:07.423152 1376 log.go:181] (0x2bbe000) Data frame received for 3\nI0111 17:03:07.423587 1376 log.go:181] (0x2bbe150) (3) Data frame handling\nI0111 17:03:07.423784 1376 log.go:181] (0x2bbe000) Data frame received for 1\nI0111 17:03:07.424011 1376 log.go:181] (0x2bbe070) (1) Data frame handling\nI0111 17:03:07.424290 1376 log.go:181] (0x2bbe000) Data frame received for 5\nI0111 17:03:07.424500 1376 log.go:181] (0x25c21c0) (5) Data frame handling\nI0111 17:03:07.426389 1376 log.go:181] (0x2bbe070) (1) Data frame sent\nI0111 17:03:07.427129 1376 log.go:181] (0x25c21c0) (5) Data frame sent\nI0111 17:03:07.427384 1376 log.go:181] (0x2bbe000) Data frame received for 5\nI0111 17:03:07.427548 1376 log.go:181] (0x25c21c0) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.2.48 80\nConnection to 10.96.2.48 80 port [tcp/http] succeeded!\nI0111 17:03:07.429552 1376 log.go:181] (0x2bbe000) (0x2bbe070) Stream removed, broadcasting: 1\nI0111 17:03:07.433047 1376 log.go:181] (0x2bbe000) (0x2bbe070) Stream removed, broadcasting: 1\nI0111 17:03:07.433382 1376 log.go:181] (0x2bbe000) (0x2bbe150) Stream 
removed, broadcasting: 3\nI0111 17:03:07.434013 1376 log.go:181] (0x2bbe000) Go away received\nI0111 17:03:07.435132 1376 log.go:181] (0x2bbe000) (0x25c21c0) Stream removed, broadcasting: 5\n" Jan 11 17:03:07.444: INFO: stdout: "" Jan 11 17:03:07.445: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-8430 exec execpod-affinity55vm8 -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.13 32339' Jan 11 17:03:08.975: INFO: stderr: "I0111 17:03:08.826405 1396 log.go:181] (0x28b01c0) (0x28b02a0) Create stream\nI0111 17:03:08.828419 1396 log.go:181] (0x28b01c0) (0x28b02a0) Stream added, broadcasting: 1\nI0111 17:03:08.848686 1396 log.go:181] (0x28b01c0) Reply frame received for 1\nI0111 17:03:08.849279 1396 log.go:181] (0x28b01c0) (0x2f98150) Create stream\nI0111 17:03:08.849353 1396 log.go:181] (0x28b01c0) (0x2f98150) Stream added, broadcasting: 3\nI0111 17:03:08.850732 1396 log.go:181] (0x28b01c0) Reply frame received for 3\nI0111 17:03:08.850964 1396 log.go:181] (0x28b01c0) (0x28b00e0) Create stream\nI0111 17:03:08.851024 1396 log.go:181] (0x28b01c0) (0x28b00e0) Stream added, broadcasting: 5\nI0111 17:03:08.851998 1396 log.go:181] (0x28b01c0) Reply frame received for 5\nI0111 17:03:08.952530 1396 log.go:181] (0x28b01c0) Data frame received for 5\nI0111 17:03:08.952764 1396 log.go:181] (0x28b01c0) Data frame received for 1\nI0111 17:03:08.953177 1396 log.go:181] (0x28b01c0) Data frame received for 3\nI0111 17:03:08.953344 1396 log.go:181] (0x28b00e0) (5) Data frame handling\nI0111 17:03:08.953511 1396 log.go:181] (0x28b02a0) (1) Data frame handling\nI0111 17:03:08.953824 1396 log.go:181] (0x2f98150) (3) Data frame handling\n+ nc -zv -t -w 2 172.18.0.13 32339\nConnection to 172.18.0.13 32339 port [tcp/32339] succeeded!\nI0111 17:03:08.955812 1396 log.go:181] (0x28b00e0) (5) Data frame sent\nI0111 17:03:08.955980 1396 log.go:181] (0x28b02a0) (1) Data frame sent\nI0111 17:03:08.956623 1396 log.go:181] (0x28b01c0) Data frame received for 5\nI0111 17:03:08.956739 1396 log.go:181] (0x28b00e0) (5) Data frame handling\nI0111 17:03:08.958132 1396 log.go:181] (0x28b01c0) (0x28b02a0) Stream removed, broadcasting: 1\nI0111 17:03:08.959629 1396 log.go:181] (0x28b01c0) Go away received\nI0111 17:03:08.963784 1396 log.go:181] (0x28b01c0) (0x28b02a0) Stream removed, broadcasting: 1\nI0111 17:03:08.964071 1396 log.go:181] (0x28b01c0) (0x2f98150) Stream removed, broadcasting: 3\nI0111 17:03:08.964281 1396 log.go:181] (0x28b01c0) (0x28b00e0) Stream removed, broadcasting: 5\n" Jan 11 17:03:08.976: INFO: stdout: "" Jan 11 17:03:08.977: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-8430 exec execpod-affinity55vm8 -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.12 32339' Jan 11 17:03:10.465: INFO: stderr: "I0111 17:03:10.351917 1416 log.go:181] (0x26780e0) (0x26782a0) Create stream\nI0111 17:03:10.355587 1416 log.go:181] (0x26780e0) (0x26782a0) Stream added, broadcasting: 1\nI0111 17:03:10.374643 1416 log.go:181] (0x26780e0) Reply frame received for 1\nI0111 17:03:10.375064 1416 log.go:181] (0x26780e0) (0x2a38070) Create stream\nI0111 17:03:10.375132 1416 log.go:181] (0x26780e0) (0x2a38070) Stream added, broadcasting: 3\nI0111 17:03:10.376221 1416 log.go:181] (0x26780e0) Reply frame received for 3\nI0111 17:03:10.376433 1416 log.go:181] (0x26780e0) (0x2a382a0) Create stream\nI0111 17:03:10.376490 1416 log.go:181] (0x26780e0) (0x2a382a0) Stream added, broadcasting: 5\nI0111 
17:03:10.377622 1416 log.go:181] (0x26780e0) Reply frame received for 5\nI0111 17:03:10.446115 1416 log.go:181] (0x26780e0) Data frame received for 5\nI0111 17:03:10.446455 1416 log.go:181] (0x26780e0) Data frame received for 1\nI0111 17:03:10.446871 1416 log.go:181] (0x26782a0) (1) Data frame handling\nI0111 17:03:10.447486 1416 log.go:181] (0x26780e0) Data frame received for 3\nI0111 17:03:10.447737 1416 log.go:181] (0x2a38070) (3) Data frame handling\nI0111 17:03:10.447982 1416 log.go:181] (0x2a382a0) (5) Data frame handling\nI0111 17:03:10.449192 1416 log.go:181] (0x2a382a0) (5) Data frame sent\nI0111 17:03:10.449371 1416 log.go:181] (0x26782a0) (1) Data frame sent\nI0111 17:03:10.449873 1416 log.go:181] (0x26780e0) Data frame received for 5\nI0111 17:03:10.449987 1416 log.go:181] (0x2a382a0) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.12 32339\nConnection to 172.18.0.12 32339 port [tcp/32339] succeeded!\nI0111 17:03:10.452346 1416 log.go:181] (0x26780e0) (0x26782a0) Stream removed, broadcasting: 1\nI0111 17:03:10.453375 1416 log.go:181] (0x26780e0) Go away received\nI0111 17:03:10.456413 1416 log.go:181] (0x26780e0) (0x26782a0) Stream removed, broadcasting: 1\nI0111 17:03:10.456642 1416 log.go:181] (0x26780e0) (0x2a38070) Stream removed, broadcasting: 3\nI0111 17:03:10.457031 1416 log.go:181] (0x26780e0) (0x2a382a0) Stream removed, broadcasting: 5\n" Jan 11 17:03:10.466: INFO: stdout: "" Jan 11 17:03:10.481: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-8430 exec execpod-affinity55vm8 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.13:32339/ ; done' Jan 11 17:03:12.147: INFO: stderr: "I0111 17:03:11.932420 1436 log.go:181] (0x2ad0000) (0x2ad0070) Create stream\nI0111 17:03:11.936959 1436 log.go:181] (0x2ad0000) (0x2ad0070) Stream added, broadcasting: 1\nI0111 17:03:11.946977 1436 log.go:181] (0x2ad0000) Reply frame received for 1\nI0111 17:03:11.947506 1436 log.go:181] (0x2ad0000) (0x2db8070) Create stream\nI0111 17:03:11.947592 1436 log.go:181] (0x2ad0000) (0x2db8070) Stream added, broadcasting: 3\nI0111 17:03:11.949465 1436 log.go:181] (0x2ad0000) Reply frame received for 3\nI0111 17:03:11.949688 1436 log.go:181] (0x2ad0000) (0x2db82a0) Create stream\nI0111 17:03:11.949752 1436 log.go:181] (0x2ad0000) (0x2db82a0) Stream added, broadcasting: 5\nI0111 17:03:11.951518 1436 log.go:181] (0x2ad0000) Reply frame received for 5\nI0111 17:03:12.034832 1436 log.go:181] (0x2ad0000) Data frame received for 3\nI0111 17:03:12.035111 1436 log.go:181] (0x2db8070) (3) Data frame handling\nI0111 17:03:12.035377 1436 log.go:181] (0x2ad0000) Data frame received for 5\nI0111 17:03:12.035568 1436 log.go:181] (0x2db8070) (3) Data frame sent\nI0111 17:03:12.035790 1436 log.go:181] (0x2db82a0) (5) Data frame handling\nI0111 17:03:12.035998 1436 log.go:181] (0x2db82a0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:32339/\nI0111 17:03:12.038383 1436 log.go:181] (0x2ad0000) Data frame received for 3\nI0111 17:03:12.038546 1436 log.go:181] (0x2db8070) (3) Data frame handling\nI0111 17:03:12.038744 1436 log.go:181] (0x2db8070) (3) Data frame sent\nI0111 17:03:12.038904 1436 log.go:181] (0x2ad0000) Data frame received for 5\nI0111 17:03:12.039053 1436 log.go:181] (0x2db82a0) (5) Data frame handling\nI0111 17:03:12.039163 1436 log.go:181] (0x2db82a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 
http://172.18.0.13:32339/\nI0111 17:03:12.039257 1436 log.go:181] (0x2ad0000) Data frame received for 3\nI0111 17:03:12.039340 1436 log.go:181] (0x2db8070) (3) Data frame handling\nI0111 17:03:12.039449 1436 log.go:181] (0x2db8070) (3) Data frame sent\nI0111 17:03:12.042122 1436 log.go:181] (0x2ad0000) Data frame received for 3\nI0111 17:03:12.042270 1436 log.go:181] (0x2db8070) (3) Data frame handling\nI0111 17:03:12.042397 1436 log.go:181] (0x2db8070) (3) Data frame sent\nI0111 17:03:12.042749 1436 log.go:181] (0x2ad0000) Data frame received for 3\nI0111 17:03:12.042902 1436 log.go:181] (0x2db8070) (3) Data frame handling\nI0111 17:03:12.043010 1436 log.go:181] (0x2db8070) (3) Data frame sent\nI0111 17:03:12.043159 1436 log.go:181] (0x2ad0000) Data frame received for 5\nI0111 17:03:12.043255 1436 log.go:181] (0x2db82a0) (5) Data frame handling\nI0111 17:03:12.043365 1436 log.go:181] (0x2db82a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:32339/\nI0111 17:03:12.050231 1436 log.go:181] (0x2ad0000) Data frame received for 3\nI0111 17:03:12.050340 1436 log.go:181] (0x2db8070) (3) Data frame handling\nI0111 17:03:12.050455 1436 log.go:181] (0x2db8070) (3) Data frame sent\nI0111 17:03:12.050899 1436 log.go:181] (0x2ad0000) Data frame received for 5\nI0111 17:03:12.051023 1436 log.go:181] (0x2db82a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:32339/\nI0111 17:03:12.051176 1436 log.go:181] (0x2ad0000) Data frame received for 3\nI0111 17:03:12.051318 1436 log.go:181] (0x2db8070) (3) Data frame handling\nI0111 17:03:12.051441 1436 log.go:181] (0x2db82a0) (5) Data frame sent\nI0111 17:03:12.051548 1436 log.go:181] (0x2db8070) (3) Data frame sent\nI0111 17:03:12.055946 1436 log.go:181] (0x2ad0000) Data frame received for 3\nI0111 17:03:12.056046 1436 log.go:181] (0x2db8070) (3) Data frame handling\nI0111 17:03:12.056160 1436 log.go:181] (0x2db8070) (3) Data frame sent\nI0111 17:03:12.056469 1436 log.go:181] (0x2ad0000) Data frame received for 5\nI0111 17:03:12.056613 1436 log.go:181] (0x2db82a0) (5) Data frame handling\nI0111 17:03:12.056721 1436 log.go:181] (0x2db82a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:32339/\nI0111 17:03:12.056822 1436 log.go:181] (0x2ad0000) Data frame received for 3\nI0111 17:03:12.057077 1436 log.go:181] (0x2db8070) (3) Data frame handling\nI0111 17:03:12.057171 1436 log.go:181] (0x2db8070) (3) Data frame sent\nI0111 17:03:12.061928 1436 log.go:181] (0x2ad0000) Data frame received for 3\nI0111 17:03:12.062022 1436 log.go:181] (0x2db8070) (3) Data frame handling\nI0111 17:03:12.062118 1436 log.go:181] (0x2db8070) (3) Data frame sent\nI0111 17:03:12.062633 1436 log.go:181] (0x2ad0000) Data frame received for 5\nI0111 17:03:12.062756 1436 log.go:181] (0x2db82a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:32339/\nI0111 17:03:12.062862 1436 log.go:181] (0x2ad0000) Data frame received for 3\nI0111 17:03:12.062993 1436 log.go:181] (0x2db8070) (3) Data frame handling\nI0111 17:03:12.063108 1436 log.go:181] (0x2db82a0) (5) Data frame sent\nI0111 17:03:12.063199 1436 log.go:181] (0x2db8070) (3) Data frame sent\nI0111 17:03:12.067765 1436 log.go:181] (0x2ad0000) Data frame received for 3\nI0111 17:03:12.067911 1436 log.go:181] (0x2db8070) (3) Data frame handling\nI0111 17:03:12.068042 1436 log.go:181] (0x2db8070) (3) Data frame sent\nI0111 17:03:12.073019 1436 log.go:181] (0x2ad0000) Data frame received for 5\nI0111 
17:03:12.073229 1436 log.go:181] (0x2db82a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:32339/\nI0111 17:03:12.073340 1436 log.go:181] (0x2ad0000) Data frame received for 3\nI0111 17:03:12.073736 1436 log.go:181] (0x2db8070) (3) Data frame handling\nI0111 17:03:12.073874 1436 log.go:181] (0x2db82a0) (5) Data frame sent\nI0111 17:03:12.073965 1436 log.go:181] (0x2db8070) (3) Data frame sent\nI0111 17:03:12.078394 1436 log.go:181] (0x2ad0000) Data frame received for 3\nI0111 17:03:12.078524 1436 log.go:181] (0x2db8070) (3) Data frame handling\nI0111 17:03:12.078666 1436 log.go:181] (0x2ad0000) Data frame received for 5\nI0111 17:03:12.078773 1436 log.go:181] (0x2db82a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:32339/\nI0111 17:03:12.078851 1436 log.go:181] (0x2db8070) (3) Data frame sent\nI0111 17:03:12.078923 1436 log.go:181] (0x2ad0000) Data frame received for 3\nI0111 17:03:12.078994 1436 log.go:181] (0x2db8070) (3) Data frame handling\nI0111 17:03:12.079081 1436 log.go:181] (0x2db82a0) (5) Data frame sent\nI0111 17:03:12.079158 1436 log.go:181] (0x2db8070) (3) Data frame sent\nI0111 17:03:12.084369 1436 log.go:181] (0x2ad0000) Data frame received for 3\nI0111 17:03:12.084465 1436 log.go:181] (0x2db8070) (3) Data frame handling\nI0111 17:03:12.084546 1436 log.go:181] (0x2db8070) (3) Data frame sent\nI0111 17:03:12.084931 1436 log.go:181] (0x2ad0000) Data frame received for 5\nI0111 17:03:12.085040 1436 log.go:181] (0x2db82a0) (5) Data frame handling\nI0111 17:03:12.085109 1436 log.go:181] (0x2db82a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:32339/\nI0111 17:03:12.085181 1436 log.go:181] (0x2ad0000) Data frame received for 3\nI0111 17:03:12.085244 1436 log.go:181] (0x2db8070) (3) Data frame handling\nI0111 17:03:12.085313 1436 log.go:181] (0x2db8070) (3) Data frame sent\nI0111 17:03:12.089901 1436 log.go:181] (0x2ad0000) Data frame received for 3\nI0111 17:03:12.089992 1436 log.go:181] (0x2db8070) (3) Data frame handling\nI0111 17:03:12.090098 1436 log.go:181] (0x2db8070) (3) Data frame sent\nI0111 17:03:12.090510 1436 log.go:181] (0x2ad0000) Data frame received for 5\nI0111 17:03:12.090659 1436 log.go:181] (0x2db82a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:32339/\nI0111 17:03:12.090760 1436 log.go:181] (0x2ad0000) Data frame received for 3\nI0111 17:03:12.090890 1436 log.go:181] (0x2db8070) (3) Data frame handling\nI0111 17:03:12.090982 1436 log.go:181] (0x2db82a0) (5) Data frame sent\nI0111 17:03:12.091096 1436 log.go:181] (0x2db8070) (3) Data frame sent\nI0111 17:03:12.094968 1436 log.go:181] (0x2ad0000) Data frame received for 3\nI0111 17:03:12.095052 1436 log.go:181] (0x2db8070) (3) Data frame handling\nI0111 17:03:12.095126 1436 log.go:181] (0x2db8070) (3) Data frame sent\nI0111 17:03:12.095820 1436 log.go:181] (0x2ad0000) Data frame received for 3\nI0111 17:03:12.095907 1436 log.go:181] (0x2db8070) (3) Data frame handling\nI0111 17:03:12.095974 1436 log.go:181] (0x2db8070) (3) Data frame sent\nI0111 17:03:12.096055 1436 log.go:181] (0x2ad0000) Data frame received for 5\nI0111 17:03:12.096130 1436 log.go:181] (0x2db82a0) (5) Data frame handling\nI0111 17:03:12.096213 1436 log.go:181] (0x2db82a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeoutI0111 17:03:12.096275 1436 log.go:181] (0x2ad0000) Data frame received for 5\nI0111 17:03:12.096351 1436 log.go:181] (0x2db82a0) (5) Data frame handling\nI0111 
17:03:12.096474 1436 log.go:181] (0x2db82a0) (5) Data frame sent\n 2 http://172.18.0.13:32339/\nI0111 17:03:12.099716 1436 log.go:181] (0x2ad0000) Data frame received for 3\nI0111 17:03:12.099835 1436 log.go:181] (0x2db8070) (3) Data frame handling\nI0111 17:03:12.099946 1436 log.go:181] (0x2db8070) (3) Data frame sent\nI0111 17:03:12.100404 1436 log.go:181] (0x2ad0000) Data frame received for 5\nI0111 17:03:12.100477 1436 log.go:181] (0x2db82a0) (5) Data frame handling\nI0111 17:03:12.100558 1436 log.go:181] (0x2db82a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:32339/\nI0111 17:03:12.100626 1436 log.go:181] (0x2ad0000) Data frame received for 3\nI0111 17:03:12.100686 1436 log.go:181] (0x2db8070) (3) Data frame handling\nI0111 17:03:12.100758 1436 log.go:181] (0x2db8070) (3) Data frame sent\nI0111 17:03:12.105515 1436 log.go:181] (0x2ad0000) Data frame received for 3\nI0111 17:03:12.105585 1436 log.go:181] (0x2db8070) (3) Data frame handling\nI0111 17:03:12.105661 1436 log.go:181] (0x2db8070) (3) Data frame sent\nI0111 17:03:12.106401 1436 log.go:181] (0x2ad0000) Data frame received for 5\nI0111 17:03:12.106497 1436 log.go:181] (0x2ad0000) Data frame received for 3\nI0111 17:03:12.106661 1436 log.go:181] (0x2db8070) (3) Data frame handling\nI0111 17:03:12.106830 1436 log.go:181] (0x2db82a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:32339/\nI0111 17:03:12.106992 1436 log.go:181] (0x2db8070) (3) Data frame sent\nI0111 17:03:12.107171 1436 log.go:181] (0x2db82a0) (5) Data frame sent\nI0111 17:03:12.111835 1436 log.go:181] (0x2ad0000) Data frame received for 3\nI0111 17:03:12.111952 1436 log.go:181] (0x2db8070) (3) Data frame handling\nI0111 17:03:12.112133 1436 log.go:181] (0x2db8070) (3) Data frame sent\nI0111 17:03:12.112697 1436 log.go:181] (0x2ad0000) Data frame received for 5\nI0111 17:03:12.112799 1436 log.go:181] (0x2db82a0) (5) Data frame handling\nI0111 17:03:12.112996 1436 log.go:181] (0x2db82a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2I0111 17:03:12.113119 1436 log.go:181] (0x2ad0000) Data frame received for 5\nI0111 17:03:12.113266 1436 log.go:181] (0x2ad0000) Data frame received for 3\nI0111 17:03:12.113442 1436 log.go:181] (0x2db8070) (3) Data frame handling\nI0111 17:03:12.113576 1436 log.go:181] (0x2db82a0) (5) Data frame handling\nI0111 17:03:12.113665 1436 log.go:181] (0x2db82a0) (5) Data frame sent\n http://172.18.0.13:32339/\nI0111 17:03:12.113746 1436 log.go:181] (0x2db8070) (3) Data frame sent\nI0111 17:03:12.117108 1436 log.go:181] (0x2ad0000) Data frame received for 3\nI0111 17:03:12.117208 1436 log.go:181] (0x2db8070) (3) Data frame handling\nI0111 17:03:12.117286 1436 log.go:181] (0x2db8070) (3) Data frame sent\nI0111 17:03:12.117776 1436 log.go:181] (0x2ad0000) Data frame received for 5\nI0111 17:03:12.117920 1436 log.go:181] (0x2db82a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:32339/\nI0111 17:03:12.118028 1436 log.go:181] (0x2ad0000) Data frame received for 3\nI0111 17:03:12.118138 1436 log.go:181] (0x2db8070) (3) Data frame handling\nI0111 17:03:12.118266 1436 log.go:181] (0x2db82a0) (5) Data frame sent\nI0111 17:03:12.118390 1436 log.go:181] (0x2db8070) (3) Data frame sent\nI0111 17:03:12.125127 1436 log.go:181] (0x2ad0000) Data frame received for 3\nI0111 17:03:12.125247 1436 log.go:181] (0x2db8070) (3) Data frame handling\nI0111 17:03:12.125332 1436 log.go:181] (0x2db8070) (3) Data frame sent\nI0111 17:03:12.126291 
1436 log.go:181] (0x2ad0000) Data frame received for 5\nI0111 17:03:12.126412 1436 log.go:181] (0x2db82a0) (5) Data frame handling\nI0111 17:03:12.126515 1436 log.go:181] (0x2db82a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:32339/\nI0111 17:03:12.126604 1436 log.go:181] (0x2ad0000) Data frame received for 3\nI0111 17:03:12.126684 1436 log.go:181] (0x2db8070) (3) Data frame handling\nI0111 17:03:12.126782 1436 log.go:181] (0x2db8070) (3) Data frame sent\nI0111 17:03:12.130022 1436 log.go:181] (0x2ad0000) Data frame received for 3\nI0111 17:03:12.130182 1436 log.go:181] (0x2db8070) (3) Data frame handling\nI0111 17:03:12.130301 1436 log.go:181] (0x2db8070) (3) Data frame sent\nI0111 17:03:12.130455 1436 log.go:181] (0x2ad0000) Data frame received for 3\nI0111 17:03:12.130556 1436 log.go:181] (0x2db8070) (3) Data frame handling\nI0111 17:03:12.130798 1436 log.go:181] (0x2ad0000) Data frame received for 5\nI0111 17:03:12.130869 1436 log.go:181] (0x2db82a0) (5) Data frame handling\nI0111 17:03:12.134063 1436 log.go:181] (0x2ad0000) Data frame received for 1\nI0111 17:03:12.134158 1436 log.go:181] (0x2ad0070) (1) Data frame handling\nI0111 17:03:12.134276 1436 log.go:181] (0x2ad0070) (1) Data frame sent\nI0111 17:03:12.134766 1436 log.go:181] (0x2ad0000) (0x2ad0070) Stream removed, broadcasting: 1\nI0111 17:03:12.136991 1436 log.go:181] (0x2ad0000) Go away received\nI0111 17:03:12.139167 1436 log.go:181] (0x2ad0000) (0x2ad0070) Stream removed, broadcasting: 1\nI0111 17:03:12.139316 1436 log.go:181] (0x2ad0000) (0x2db8070) Stream removed, broadcasting: 3\nI0111 17:03:12.139442 1436 log.go:181] (0x2ad0000) (0x2db82a0) Stream removed, broadcasting: 5\n" Jan 11 17:03:12.151: INFO: stdout: "\naffinity-nodeport-transition-rqm9g\naffinity-nodeport-transition-rqm9g\naffinity-nodeport-transition-rqm9g\naffinity-nodeport-transition-bw649\naffinity-nodeport-transition-9knmm\naffinity-nodeport-transition-bw649\naffinity-nodeport-transition-bw649\naffinity-nodeport-transition-rqm9g\naffinity-nodeport-transition-9knmm\naffinity-nodeport-transition-9knmm\naffinity-nodeport-transition-9knmm\naffinity-nodeport-transition-9knmm\naffinity-nodeport-transition-9knmm\naffinity-nodeport-transition-rqm9g\naffinity-nodeport-transition-rqm9g\naffinity-nodeport-transition-bw649" Jan 11 17:03:12.151: INFO: Received response from host: affinity-nodeport-transition-rqm9g Jan 11 17:03:12.151: INFO: Received response from host: affinity-nodeport-transition-rqm9g Jan 11 17:03:12.151: INFO: Received response from host: affinity-nodeport-transition-rqm9g Jan 11 17:03:12.151: INFO: Received response from host: affinity-nodeport-transition-bw649 Jan 11 17:03:12.151: INFO: Received response from host: affinity-nodeport-transition-9knmm Jan 11 17:03:12.151: INFO: Received response from host: affinity-nodeport-transition-bw649 Jan 11 17:03:12.151: INFO: Received response from host: affinity-nodeport-transition-bw649 Jan 11 17:03:12.151: INFO: Received response from host: affinity-nodeport-transition-rqm9g Jan 11 17:03:12.151: INFO: Received response from host: affinity-nodeport-transition-9knmm Jan 11 17:03:12.151: INFO: Received response from host: affinity-nodeport-transition-9knmm Jan 11 17:03:12.152: INFO: Received response from host: affinity-nodeport-transition-9knmm Jan 11 17:03:12.152: INFO: Received response from host: affinity-nodeport-transition-9knmm Jan 11 17:03:12.152: INFO: Received response from host: affinity-nodeport-transition-9knmm Jan 11 17:03:12.152: INFO: Received response 
from host: affinity-nodeport-transition-rqm9g Jan 11 17:03:12.152: INFO: Received response from host: affinity-nodeport-transition-rqm9g Jan 11 17:03:12.152: INFO: Received response from host: affinity-nodeport-transition-bw649 Jan 11 17:03:12.167: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-8430 exec execpod-affinity55vm8 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.13:32339/ ; done' Jan 11 17:03:13.769: INFO: stderr: "I0111 17:03:13.565842 1456 log.go:181] (0x279aa80) (0x279b3b0) Create stream\nI0111 17:03:13.568694 1456 log.go:181] (0x279aa80) (0x279b3b0) Stream added, broadcasting: 1\nI0111 17:03:13.590634 1456 log.go:181] (0x279aa80) Reply frame received for 1\nI0111 17:03:13.591181 1456 log.go:181] (0x279aa80) (0x27fc0e0) Create stream\nI0111 17:03:13.591265 1456 log.go:181] (0x279aa80) (0x27fc0e0) Stream added, broadcasting: 3\nI0111 17:03:13.592674 1456 log.go:181] (0x279aa80) Reply frame received for 3\nI0111 17:03:13.592955 1456 log.go:181] (0x279aa80) (0x2ea8070) Create stream\nI0111 17:03:13.593024 1456 log.go:181] (0x279aa80) (0x2ea8070) Stream added, broadcasting: 5\nI0111 17:03:13.594072 1456 log.go:181] (0x279aa80) Reply frame received for 5\nI0111 17:03:13.669089 1456 log.go:181] (0x279aa80) Data frame received for 5\nI0111 17:03:13.669324 1456 log.go:181] (0x2ea8070) (5) Data frame handling\nI0111 17:03:13.669529 1456 log.go:181] (0x279aa80) Data frame received for 3\nI0111 17:03:13.669646 1456 log.go:181] (0x27fc0e0) (3) Data frame handling\nI0111 17:03:13.669814 1456 log.go:181] (0x2ea8070) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:32339/\nI0111 17:03:13.670032 1456 log.go:181] (0x27fc0e0) (3) Data frame sent\nI0111 17:03:13.670819 1456 log.go:181] (0x279aa80) Data frame received for 5\nI0111 17:03:13.670888 1456 log.go:181] (0x2ea8070) (5) Data frame handling\nI0111 17:03:13.670957 1456 log.go:181] (0x2ea8070) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:32339/\nI0111 17:03:13.671044 1456 log.go:181] (0x279aa80) Data frame received for 3\nI0111 17:03:13.671125 1456 log.go:181] (0x27fc0e0) (3) Data frame handling\nI0111 17:03:13.671200 1456 log.go:181] (0x27fc0e0) (3) Data frame sent\nI0111 17:03:13.671261 1456 log.go:181] (0x279aa80) Data frame received for 3\nI0111 17:03:13.671312 1456 log.go:181] (0x27fc0e0) (3) Data frame handling\nI0111 17:03:13.671378 1456 log.go:181] (0x27fc0e0) (3) Data frame sent\nI0111 17:03:13.675655 1456 log.go:181] (0x279aa80) Data frame received for 3\nI0111 17:03:13.675723 1456 log.go:181] (0x27fc0e0) (3) Data frame handling\nI0111 17:03:13.675808 1456 log.go:181] (0x27fc0e0) (3) Data frame sent\nI0111 17:03:13.676268 1456 log.go:181] (0x279aa80) Data frame received for 5\nI0111 17:03:13.676366 1456 log.go:181] (0x2ea8070) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:32339/\nI0111 17:03:13.676460 1456 log.go:181] (0x279aa80) Data frame received for 3\nI0111 17:03:13.676602 1456 log.go:181] (0x27fc0e0) (3) Data frame handling\nI0111 17:03:13.676693 1456 log.go:181] (0x2ea8070) (5) Data frame sent\nI0111 17:03:13.676778 1456 log.go:181] (0x27fc0e0) (3) Data frame sent\nI0111 17:03:13.679578 1456 log.go:181] (0x279aa80) Data frame received for 3\nI0111 17:03:13.679665 1456 log.go:181] (0x27fc0e0) (3) Data frame handling\nI0111 17:03:13.679764 1456 log.go:181] (0x27fc0e0) (3) Data frame 
sent\nI0111 17:03:13.680415 1456 log.go:181] (0x279aa80) Data frame received for 3\nI0111 17:03:13.680529 1456 log.go:181] (0x27fc0e0) (3) Data frame handling\nI0111 17:03:13.680607 1456 log.go:181] (0x27fc0e0) (3) Data frame sent\nI0111 17:03:13.680727 1456 log.go:181] (0x279aa80) Data frame received for 5\nI0111 17:03:13.680818 1456 log.go:181] (0x2ea8070) (5) Data frame handling\nI0111 17:03:13.680979 1456 log.go:181] (0x2ea8070) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:32339/\nI0111 17:03:13.685206 1456 log.go:181] (0x279aa80) Data frame received for 3\nI0111 17:03:13.685313 1456 log.go:181] (0x27fc0e0) (3) Data frame handling\nI0111 17:03:13.685450 1456 log.go:181] (0x27fc0e0) (3) Data frame sent\nI0111 17:03:13.685721 1456 log.go:181] (0x279aa80) Data frame received for 5\nI0111 17:03:13.685797 1456 log.go:181] (0x2ea8070) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:32339/\nI0111 17:03:13.685877 1456 log.go:181] (0x279aa80) Data frame received for 3\nI0111 17:03:13.685978 1456 log.go:181] (0x27fc0e0) (3) Data frame handling\nI0111 17:03:13.686051 1456 log.go:181] (0x2ea8070) (5) Data frame sent\nI0111 17:03:13.686117 1456 log.go:181] (0x27fc0e0) (3) Data frame sent\nI0111 17:03:13.690183 1456 log.go:181] (0x279aa80) Data frame received for 3\nI0111 17:03:13.690294 1456 log.go:181] (0x27fc0e0) (3) Data frame handling\nI0111 17:03:13.690414 1456 log.go:181] (0x27fc0e0) (3) Data frame sent\nI0111 17:03:13.691194 1456 log.go:181] (0x279aa80) Data frame received for 3\nI0111 17:03:13.691268 1456 log.go:181] (0x27fc0e0) (3) Data frame handling\nI0111 17:03:13.691334 1456 log.go:181] (0x27fc0e0) (3) Data frame sent\nI0111 17:03:13.691394 1456 log.go:181] (0x279aa80) Data frame received for 5\nI0111 17:03:13.691449 1456 log.go:181] (0x2ea8070) (5) Data frame handling\nI0111 17:03:13.691523 1456 log.go:181] (0x2ea8070) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:32339/\nI0111 17:03:13.696824 1456 log.go:181] (0x279aa80) Data frame received for 3\nI0111 17:03:13.696996 1456 log.go:181] (0x27fc0e0) (3) Data frame handling\nI0111 17:03:13.697127 1456 log.go:181] (0x27fc0e0) (3) Data frame sent\nI0111 17:03:13.697772 1456 log.go:181] (0x279aa80) Data frame received for 5\nI0111 17:03:13.697884 1456 log.go:181] (0x2ea8070) (5) Data frame handling\nI0111 17:03:13.697976 1456 log.go:181] (0x2ea8070) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:32339/\nI0111 17:03:13.698060 1456 log.go:181] (0x279aa80) Data frame received for 3\nI0111 17:03:13.698137 1456 log.go:181] (0x27fc0e0) (3) Data frame handling\nI0111 17:03:13.698232 1456 log.go:181] (0x27fc0e0) (3) Data frame sent\nI0111 17:03:13.701089 1456 log.go:181] (0x279aa80) Data frame received for 3\nI0111 17:03:13.701208 1456 log.go:181] (0x27fc0e0) (3) Data frame handling\nI0111 17:03:13.701356 1456 log.go:181] (0x27fc0e0) (3) Data frame sent\nI0111 17:03:13.701923 1456 log.go:181] (0x279aa80) Data frame received for 3\nI0111 17:03:13.702108 1456 log.go:181] (0x27fc0e0) (3) Data frame handling\nI0111 17:03:13.702324 1456 log.go:181] (0x279aa80) Data frame received for 5\nI0111 17:03:13.702501 1456 log.go:181] (0x2ea8070) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:32339/\nI0111 17:03:13.702655 1456 log.go:181] (0x27fc0e0) (3) Data frame sent\nI0111 17:03:13.702841 1456 log.go:181] (0x2ea8070) (5) Data frame sent\nI0111 17:03:13.705318 1456 log.go:181] 
(0x279aa80) Data frame received for 3\nI0111 17:03:13.705487 1456 log.go:181] (0x27fc0e0) (3) Data frame handling\nI0111 17:03:13.705609 1456 log.go:181] (0x27fc0e0) (3) Data frame sent\nI0111 17:03:13.706221 1456 log.go:181] (0x279aa80) Data frame received for 3\nI0111 17:03:13.706359 1456 log.go:181] (0x27fc0e0) (3) Data frame handling\nI0111 17:03:13.706461 1456 log.go:181] (0x279aa80) Data frame received for 5\nI0111 17:03:13.706574 1456 log.go:181] (0x2ea8070) (5) Data frame handling\nI0111 17:03:13.706709 1456 log.go:181] (0x2ea8070) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:32339/\nI0111 17:03:13.706823 1456 log.go:181] (0x27fc0e0) (3) Data frame sent\nI0111 17:03:13.710055 1456 log.go:181] (0x279aa80) Data frame received for 3\nI0111 17:03:13.710206 1456 log.go:181] (0x27fc0e0) (3) Data frame handling\nI0111 17:03:13.710377 1456 log.go:181] (0x27fc0e0) (3) Data frame sent\nI0111 17:03:13.710929 1456 log.go:181] (0x279aa80) Data frame received for 5\nI0111 17:03:13.711045 1456 log.go:181] (0x2ea8070) (5) Data frame handling\nI0111 17:03:13.711145 1456 log.go:181] (0x2ea8070) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:32339/\nI0111 17:03:13.711243 1456 log.go:181] (0x279aa80) Data frame received for 3\nI0111 17:03:13.711330 1456 log.go:181] (0x27fc0e0) (3) Data frame handling\nI0111 17:03:13.711434 1456 log.go:181] (0x27fc0e0) (3) Data frame sent\nI0111 17:03:13.716415 1456 log.go:181] (0x279aa80) Data frame received for 3\nI0111 17:03:13.716580 1456 log.go:181] (0x27fc0e0) (3) Data frame handling\nI0111 17:03:13.716702 1456 log.go:181] (0x27fc0e0) (3) Data frame sent\nI0111 17:03:13.717067 1456 log.go:181] (0x279aa80) Data frame received for 5\nI0111 17:03:13.717161 1456 log.go:181] (0x2ea8070) (5) Data frame handling\nI0111 17:03:13.717258 1456 log.go:181] (0x2ea8070) (5) Data frame sent\nI0111 17:03:13.717355 1456 log.go:181] (0x279aa80) Data frame received for 5\n+ echo\n+ curl -q -sI0111 17:03:13.717432 1456 log.go:181] (0x2ea8070) (5) Data frame handling\n --connect-timeout 2 http://172.18.0.13:32339/\nI0111 17:03:13.717553 1456 log.go:181] (0x279aa80) Data frame received for 3\nI0111 17:03:13.717687 1456 log.go:181] (0x27fc0e0) (3) Data frame handling\nI0111 17:03:13.717788 1456 log.go:181] (0x2ea8070) (5) Data frame sent\nI0111 17:03:13.717936 1456 log.go:181] (0x27fc0e0) (3) Data frame sent\nI0111 17:03:13.723162 1456 log.go:181] (0x279aa80) Data frame received for 3\nI0111 17:03:13.723267 1456 log.go:181] (0x27fc0e0) (3) Data frame handling\nI0111 17:03:13.723381 1456 log.go:181] (0x27fc0e0) (3) Data frame sent\nI0111 17:03:13.723631 1456 log.go:181] (0x279aa80) Data frame received for 3\nI0111 17:03:13.723730 1456 log.go:181] (0x279aa80) Data frame received for 5\nI0111 17:03:13.723855 1456 log.go:181] (0x2ea8070) (5) Data frame handling\nI0111 17:03:13.723923 1456 log.go:181] (0x27fc0e0) (3) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:32339/\nI0111 17:03:13.724018 1456 log.go:181] (0x27fc0e0) (3) Data frame sent\nI0111 17:03:13.724085 1456 log.go:181] (0x2ea8070) (5) Data frame sent\nI0111 17:03:13.729749 1456 log.go:181] (0x279aa80) Data frame received for 3\nI0111 17:03:13.729844 1456 log.go:181] (0x27fc0e0) (3) Data frame handling\nI0111 17:03:13.729941 1456 log.go:181] (0x27fc0e0) (3) Data frame sent\nI0111 17:03:13.730482 1456 log.go:181] (0x279aa80) Data frame received for 3\nI0111 17:03:13.730583 1456 log.go:181] (0x27fc0e0) (3) Data frame 
handling\nI0111 17:03:13.730673 1456 log.go:181] (0x27fc0e0) (3) Data frame sent\nI0111 17:03:13.730770 1456 log.go:181] (0x279aa80) Data frame received for 5\nI0111 17:03:13.730852 1456 log.go:181] (0x2ea8070) (5) Data frame handling\nI0111 17:03:13.730977 1456 log.go:181] (0x2ea8070) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:32339/\nI0111 17:03:13.733905 1456 log.go:181] (0x279aa80) Data frame received for 3\nI0111 17:03:13.734026 1456 log.go:181] (0x27fc0e0) (3) Data frame handling\nI0111 17:03:13.734127 1456 log.go:181] (0x27fc0e0) (3) Data frame sent\nI0111 17:03:13.734520 1456 log.go:181] (0x279aa80) Data frame received for 5\nI0111 17:03:13.734760 1456 log.go:181] (0x2ea8070) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:32339/\nI0111 17:03:13.734917 1456 log.go:181] (0x279aa80) Data frame received for 3\nI0111 17:03:13.735479 1456 log.go:181] (0x27fc0e0) (3) Data frame handling\nI0111 17:03:13.735673 1456 log.go:181] (0x2ea8070) (5) Data frame sent\nI0111 17:03:13.735842 1456 log.go:181] (0x27fc0e0) (3) Data frame sent\nI0111 17:03:13.740077 1456 log.go:181] (0x279aa80) Data frame received for 3\nI0111 17:03:13.740214 1456 log.go:181] (0x27fc0e0) (3) Data frame handling\nI0111 17:03:13.740411 1456 log.go:181] (0x279aa80) Data frame received for 5\nI0111 17:03:13.740628 1456 log.go:181] (0x2ea8070) (5) Data frame handling\n+ echo\nI0111 17:03:13.740722 1456 log.go:181] (0x27fc0e0) (3) Data frame sent\nI0111 17:03:13.740938 1456 log.go:181] (0x279aa80) Data frame received for 3\nI0111 17:03:13.741061 1456 log.go:181] (0x27fc0e0) (3) Data frame handling\nI0111 17:03:13.741163 1456 log.go:181] (0x27fc0e0) (3) Data frame sent\nI0111 17:03:13.741256 1456 log.go:181] (0x2ea8070) (5) Data frame sent\nI0111 17:03:13.741374 1456 log.go:181] (0x279aa80) Data frame received for 5\nI0111 17:03:13.741493 1456 log.go:181] (0x2ea8070) (5) Data frame handling\nI0111 17:03:13.741627 1456 log.go:181] (0x2ea8070) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:32339/\nI0111 17:03:13.743872 1456 log.go:181] (0x279aa80) Data frame received for 3\nI0111 17:03:13.744024 1456 log.go:181] (0x27fc0e0) (3) Data frame handling\nI0111 17:03:13.744185 1456 log.go:181] (0x27fc0e0) (3) Data frame sent\nI0111 17:03:13.744781 1456 log.go:181] (0x279aa80) Data frame received for 5\nI0111 17:03:13.745035 1456 log.go:181] (0x2ea8070) (5) Data frame handling\nI0111 17:03:13.745150 1456 log.go:181] (0x2ea8070) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:32339/\nI0111 17:03:13.745302 1456 log.go:181] (0x279aa80) Data frame received for 3\nI0111 17:03:13.745408 1456 log.go:181] (0x27fc0e0) (3) Data frame handling\nI0111 17:03:13.745547 1456 log.go:181] (0x27fc0e0) (3) Data frame sent\nI0111 17:03:13.750460 1456 log.go:181] (0x279aa80) Data frame received for 3\nI0111 17:03:13.750580 1456 log.go:181] (0x27fc0e0) (3) Data frame handling\nI0111 17:03:13.750691 1456 log.go:181] (0x27fc0e0) (3) Data frame sent\nI0111 17:03:13.751007 1456 log.go:181] (0x279aa80) Data frame received for 5\nI0111 17:03:13.751143 1456 log.go:181] (0x2ea8070) (5) Data frame handling\nI0111 17:03:13.751305 1456 log.go:181] (0x279aa80) Data frame received for 3\nI0111 17:03:13.751453 1456 log.go:181] (0x27fc0e0) (3) Data frame handling\nI0111 17:03:13.755316 1456 log.go:181] (0x279aa80) Data frame received for 1\nI0111 17:03:13.755418 1456 log.go:181] (0x279b3b0) (1) Data frame handling\nI0111 17:03:13.755518 1456 
log.go:181] (0x279b3b0) (1) Data frame sent\nI0111 17:03:13.755851 1456 log.go:181] (0x279aa80) (0x279b3b0) Stream removed, broadcasting: 1\nI0111 17:03:13.757298 1456 log.go:181] (0x279aa80) Go away received\nI0111 17:03:13.759655 1456 log.go:181] (0x279aa80) (0x279b3b0) Stream removed, broadcasting: 1\nI0111 17:03:13.759934 1456 log.go:181] (0x279aa80) (0x27fc0e0) Stream removed, broadcasting: 3\nI0111 17:03:13.760177 1456 log.go:181] (0x279aa80) (0x2ea8070) Stream removed, broadcasting: 5\n" Jan 11 17:03:13.780: INFO: stdout: "\naffinity-nodeport-transition-bw649\naffinity-nodeport-transition-bw649\naffinity-nodeport-transition-bw649\naffinity-nodeport-transition-bw649\naffinity-nodeport-transition-bw649\naffinity-nodeport-transition-bw649\naffinity-nodeport-transition-bw649\naffinity-nodeport-transition-bw649\naffinity-nodeport-transition-bw649\naffinity-nodeport-transition-bw649\naffinity-nodeport-transition-bw649\naffinity-nodeport-transition-bw649\naffinity-nodeport-transition-bw649\naffinity-nodeport-transition-bw649\naffinity-nodeport-transition-bw649\naffinity-nodeport-transition-bw649" Jan 11 17:03:13.781: INFO: Received response from host: affinity-nodeport-transition-bw649 Jan 11 17:03:13.781: INFO: Received response from host: affinity-nodeport-transition-bw649 Jan 11 17:03:13.781: INFO: Received response from host: affinity-nodeport-transition-bw649 Jan 11 17:03:13.781: INFO: Received response from host: affinity-nodeport-transition-bw649 Jan 11 17:03:13.781: INFO: Received response from host: affinity-nodeport-transition-bw649 Jan 11 17:03:13.781: INFO: Received response from host: affinity-nodeport-transition-bw649 Jan 11 17:03:13.781: INFO: Received response from host: affinity-nodeport-transition-bw649 Jan 11 17:03:13.781: INFO: Received response from host: affinity-nodeport-transition-bw649 Jan 11 17:03:13.781: INFO: Received response from host: affinity-nodeport-transition-bw649 Jan 11 17:03:13.781: INFO: Received response from host: affinity-nodeport-transition-bw649 Jan 11 17:03:13.781: INFO: Received response from host: affinity-nodeport-transition-bw649 Jan 11 17:03:13.781: INFO: Received response from host: affinity-nodeport-transition-bw649 Jan 11 17:03:13.781: INFO: Received response from host: affinity-nodeport-transition-bw649 Jan 11 17:03:13.781: INFO: Received response from host: affinity-nodeport-transition-bw649 Jan 11 17:03:13.781: INFO: Received response from host: affinity-nodeport-transition-bw649 Jan 11 17:03:13.781: INFO: Received response from host: affinity-nodeport-transition-bw649 Jan 11 17:03:13.782: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-transition in namespace services-8430, will wait for the garbage collector to delete the pods Jan 11 17:03:13.903: INFO: Deleting ReplicationController affinity-nodeport-transition took: 11.940831ms Jan 11 17:03:14.004: INFO: Terminating ReplicationController affinity-nodeport-transition pods took: 100.862841ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 17:04:20.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8430" for this suite. 
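The affinity test above switches the Service between session-affinity modes and checks the curl loops: with spec.sessionAffinity set to ClientIP every response comes from a single backend (all affinity-nodeport-transition-bw649 in the second loop), while with None the responses spread across the endpoints (the first loop). The same toggle can be applied directly to a Service; affinity-demo below is a hypothetical Service name, not one from this run.

kubectl patch service affinity-demo -p '{"spec":{"sessionAffinity":"ClientIP"}}'
kubectl get service affinity-demo -o jsonpath='{.spec.sessionAffinity}{"\n"}'
# ...repeat the curl loop against the NodePort, then switch back:
kubectl patch service affinity-demo -p '{"spec":{"sessionAffinity":"None"}}'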
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 • [SLOW TEST:87.100 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":309,"completed":135,"skipped":2343,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 17:04:20.213: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-9397 [It] should perform rolling updates and roll backs of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a new StatefulSet Jan 11 17:04:20.490: INFO: Found 0 stateful pods, waiting for 3 Jan 11 17:04:30.532: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 11 17:04:30.532: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 11 17:04:30.533: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Jan 11 17:04:40.501: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 11 17:04:40.501: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 11 17:04:40.501: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Jan 11 17:04:40.525: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-9397 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 11 17:04:42.183: INFO: stderr: "I0111 17:04:42.040515 1476 log.go:181] (0x31960e0) (0x31961c0) Create stream\nI0111 17:04:42.045862 1476 log.go:181] (0x31960e0) (0x31961c0) Stream added, broadcasting: 1\nI0111 17:04:42.063636 1476 log.go:181] (0x31960e0) Reply frame received for 1\nI0111 17:04:42.064617 1476 log.go:181] (0x31960e0) (0x3196230) Create stream\nI0111 17:04:42.064686 1476 log.go:181] (0x31960e0) (0x3196230) Stream added, broadcasting: 3\nI0111 17:04:42.065962 
1476 log.go:181] (0x31960e0) Reply frame received for 3\nI0111 17:04:42.066198 1476 log.go:181] (0x31960e0) (0x2ab0150) Create stream\nI0111 17:04:42.066286 1476 log.go:181] (0x31960e0) (0x2ab0150) Stream added, broadcasting: 5\nI0111 17:04:42.067408 1476 log.go:181] (0x31960e0) Reply frame received for 5\nI0111 17:04:42.123213 1476 log.go:181] (0x31960e0) Data frame received for 5\nI0111 17:04:42.123548 1476 log.go:181] (0x2ab0150) (5) Data frame handling\nI0111 17:04:42.124236 1476 log.go:181] (0x2ab0150) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0111 17:04:42.162876 1476 log.go:181] (0x31960e0) Data frame received for 3\nI0111 17:04:42.163060 1476 log.go:181] (0x3196230) (3) Data frame handling\nI0111 17:04:42.163290 1476 log.go:181] (0x31960e0) Data frame received for 5\nI0111 17:04:42.163491 1476 log.go:181] (0x2ab0150) (5) Data frame handling\nI0111 17:04:42.163814 1476 log.go:181] (0x3196230) (3) Data frame sent\nI0111 17:04:42.164019 1476 log.go:181] (0x31960e0) Data frame received for 3\nI0111 17:04:42.164151 1476 log.go:181] (0x3196230) (3) Data frame handling\nI0111 17:04:42.165863 1476 log.go:181] (0x31960e0) Data frame received for 1\nI0111 17:04:42.166045 1476 log.go:181] (0x31961c0) (1) Data frame handling\nI0111 17:04:42.166186 1476 log.go:181] (0x31961c0) (1) Data frame sent\nI0111 17:04:42.167209 1476 log.go:181] (0x31960e0) (0x31961c0) Stream removed, broadcasting: 1\nI0111 17:04:42.171116 1476 log.go:181] (0x31960e0) Go away received\nI0111 17:04:42.174381 1476 log.go:181] (0x31960e0) (0x31961c0) Stream removed, broadcasting: 1\nI0111 17:04:42.174775 1476 log.go:181] (0x31960e0) (0x3196230) Stream removed, broadcasting: 3\nI0111 17:04:42.175039 1476 log.go:181] (0x31960e0) (0x2ab0150) Stream removed, broadcasting: 5\n" Jan 11 17:04:42.184: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 11 17:04:42.184: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Jan 11 17:04:52.238: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Jan 11 17:05:02.316: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-9397 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 11 17:05:03.895: INFO: stderr: "I0111 17:05:03.757754 1496 log.go:181] (0x2e40000) (0x2e40070) Create stream\nI0111 17:05:03.760805 1496 log.go:181] (0x2e40000) (0x2e40070) Stream added, broadcasting: 1\nI0111 17:05:03.777630 1496 log.go:181] (0x2e40000) Reply frame received for 1\nI0111 17:05:03.779713 1496 log.go:181] (0x2e40000) (0x284b180) Create stream\nI0111 17:05:03.780155 1496 log.go:181] (0x2e40000) (0x284b180) Stream added, broadcasting: 3\nI0111 17:05:03.781986 1496 log.go:181] (0x2e40000) Reply frame received for 3\nI0111 17:05:03.782270 1496 log.go:181] (0x2e40000) (0x28e8460) Create stream\nI0111 17:05:03.782369 1496 log.go:181] (0x2e40000) (0x28e8460) Stream added, broadcasting: 5\nI0111 17:05:03.783566 1496 log.go:181] (0x2e40000) Reply frame received for 5\nI0111 17:05:03.874874 1496 log.go:181] (0x2e40000) Data frame received for 3\nI0111 17:05:03.875293 1496 log.go:181] (0x2e40000) Data frame received for 5\nI0111 17:05:03.875637 1496 log.go:181] 
(0x28e8460) (5) Data frame handling\nI0111 17:05:03.876227 1496 log.go:181] (0x284b180) (3) Data frame handling\nI0111 17:05:03.876549 1496 log.go:181] (0x2e40000) Data frame received for 1\nI0111 17:05:03.876771 1496 log.go:181] (0x2e40070) (1) Data frame handling\nI0111 17:05:03.877251 1496 log.go:181] (0x2e40070) (1) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0111 17:05:03.877564 1496 log.go:181] (0x28e8460) (5) Data frame sent\nI0111 17:05:03.877730 1496 log.go:181] (0x2e40000) Data frame received for 5\nI0111 17:05:03.877896 1496 log.go:181] (0x28e8460) (5) Data frame handling\nI0111 17:05:03.878225 1496 log.go:181] (0x284b180) (3) Data frame sent\nI0111 17:05:03.878438 1496 log.go:181] (0x2e40000) Data frame received for 3\nI0111 17:05:03.878596 1496 log.go:181] (0x284b180) (3) Data frame handling\nI0111 17:05:03.880252 1496 log.go:181] (0x2e40000) (0x2e40070) Stream removed, broadcasting: 1\nI0111 17:05:03.882621 1496 log.go:181] (0x2e40000) Go away received\nI0111 17:05:03.884699 1496 log.go:181] (0x2e40000) (0x2e40070) Stream removed, broadcasting: 1\nI0111 17:05:03.885086 1496 log.go:181] (0x2e40000) (0x284b180) Stream removed, broadcasting: 3\nI0111 17:05:03.885579 1496 log.go:181] (0x2e40000) (0x28e8460) Stream removed, broadcasting: 5\n" Jan 11 17:05:03.896: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 11 17:05:03.896: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 11 17:05:13.934: INFO: Waiting for StatefulSet statefulset-9397/ss2 to complete update Jan 11 17:05:13.935: INFO: Waiting for Pod statefulset-9397/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 11 17:05:13.935: INFO: Waiting for Pod statefulset-9397/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 11 17:05:23.950: INFO: Waiting for StatefulSet statefulset-9397/ss2 to complete update Jan 11 17:05:23.951: INFO: Waiting for Pod statefulset-9397/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 11 17:05:33.950: INFO: Waiting for StatefulSet statefulset-9397/ss2 to complete update Jan 11 17:05:33.950: INFO: Waiting for Pod statefulset-9397/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 11 17:05:43.951: INFO: Waiting for StatefulSet statefulset-9397/ss2 to complete update Jan 11 17:05:43.951: INFO: Waiting for Pod statefulset-9397/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 11 17:05:53.950: INFO: Waiting for StatefulSet statefulset-9397/ss2 to complete update Jan 11 17:05:53.950: INFO: Waiting for Pod statefulset-9397/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 11 17:06:03.952: INFO: Waiting for StatefulSet statefulset-9397/ss2 to complete update Jan 11 17:06:03.952: INFO: Waiting for Pod statefulset-9397/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 11 17:06:13.953: INFO: Waiting for StatefulSet statefulset-9397/ss2 to complete update Jan 11 17:06:13.954: INFO: Waiting for Pod statefulset-9397/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Rolling back to a previous revision Jan 11 17:06:23.952: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-9397 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 11 17:06:25.494: INFO: stderr: 
"I0111 17:06:25.318705 1516 log.go:181] (0x251ed20) (0x251fb90) Create stream\nI0111 17:06:25.322172 1516 log.go:181] (0x251ed20) (0x251fb90) Stream added, broadcasting: 1\nI0111 17:06:25.340283 1516 log.go:181] (0x251ed20) Reply frame received for 1\nI0111 17:06:25.340750 1516 log.go:181] (0x251ed20) (0x251fc70) Create stream\nI0111 17:06:25.340821 1516 log.go:181] (0x251ed20) (0x251fc70) Stream added, broadcasting: 3\nI0111 17:06:25.342133 1516 log.go:181] (0x251ed20) Reply frame received for 3\nI0111 17:06:25.342355 1516 log.go:181] (0x251ed20) (0x2896150) Create stream\nI0111 17:06:25.342441 1516 log.go:181] (0x251ed20) (0x2896150) Stream added, broadcasting: 5\nI0111 17:06:25.343371 1516 log.go:181] (0x251ed20) Reply frame received for 5\nI0111 17:06:25.434454 1516 log.go:181] (0x251ed20) Data frame received for 5\nI0111 17:06:25.434636 1516 log.go:181] (0x2896150) (5) Data frame handling\nI0111 17:06:25.434913 1516 log.go:181] (0x2896150) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0111 17:06:25.475753 1516 log.go:181] (0x251ed20) Data frame received for 3\nI0111 17:06:25.476028 1516 log.go:181] (0x251fc70) (3) Data frame handling\nI0111 17:06:25.476280 1516 log.go:181] (0x251fc70) (3) Data frame sent\nI0111 17:06:25.476456 1516 log.go:181] (0x251ed20) Data frame received for 3\nI0111 17:06:25.476662 1516 log.go:181] (0x251fc70) (3) Data frame handling\nI0111 17:06:25.477051 1516 log.go:181] (0x251ed20) Data frame received for 5\nI0111 17:06:25.477270 1516 log.go:181] (0x2896150) (5) Data frame handling\nI0111 17:06:25.478465 1516 log.go:181] (0x251ed20) Data frame received for 1\nI0111 17:06:25.478676 1516 log.go:181] (0x251fb90) (1) Data frame handling\nI0111 17:06:25.478916 1516 log.go:181] (0x251fb90) (1) Data frame sent\nI0111 17:06:25.480592 1516 log.go:181] (0x251ed20) (0x251fb90) Stream removed, broadcasting: 1\nI0111 17:06:25.482529 1516 log.go:181] (0x251ed20) Go away received\nI0111 17:06:25.485822 1516 log.go:181] (0x251ed20) (0x251fb90) Stream removed, broadcasting: 1\nI0111 17:06:25.486043 1516 log.go:181] (0x251ed20) (0x251fc70) Stream removed, broadcasting: 3\nI0111 17:06:25.486221 1516 log.go:181] (0x251ed20) (0x2896150) Stream removed, broadcasting: 5\n" Jan 11 17:06:25.495: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 11 17:06:25.495: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 11 17:06:35.565: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Jan 11 17:06:45.632: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-9397 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 11 17:06:47.142: INFO: stderr: "I0111 17:06:47.010187 1536 log.go:181] (0x29f59d0) (0x29f5a40) Create stream\nI0111 17:06:47.013075 1536 log.go:181] (0x29f59d0) (0x29f5a40) Stream added, broadcasting: 1\nI0111 17:06:47.024403 1536 log.go:181] (0x29f59d0) Reply frame received for 1\nI0111 17:06:47.033965 1536 log.go:181] (0x29f59d0) (0x2e22000) Create stream\nI0111 17:06:47.034516 1536 log.go:181] (0x29f59d0) (0x2e22000) Stream added, broadcasting: 3\nI0111 17:06:47.039598 1536 log.go:181] (0x29f59d0) Reply frame received for 3\nI0111 17:06:47.039905 1536 log.go:181] (0x29f59d0) (0x2e221c0) Create stream\nI0111 17:06:47.039981 1536 log.go:181] (0x29f59d0) (0x2e221c0) Stream added, 
broadcasting: 5\nI0111 17:06:47.041250 1536 log.go:181] (0x29f59d0) Reply frame received for 5\nI0111 17:06:47.123643 1536 log.go:181] (0x29f59d0) Data frame received for 5\nI0111 17:06:47.124009 1536 log.go:181] (0x29f59d0) Data frame received for 3\nI0111 17:06:47.124237 1536 log.go:181] (0x2e22000) (3) Data frame handling\nI0111 17:06:47.124466 1536 log.go:181] (0x2e221c0) (5) Data frame handling\nI0111 17:06:47.125026 1536 log.go:181] (0x29f59d0) Data frame received for 1\nI0111 17:06:47.125166 1536 log.go:181] (0x29f5a40) (1) Data frame handling\nI0111 17:06:47.125466 1536 log.go:181] (0x2e22000) (3) Data frame sent\nI0111 17:06:47.125709 1536 log.go:181] (0x29f5a40) (1) Data frame sent\nI0111 17:06:47.125884 1536 log.go:181] (0x29f59d0) Data frame received for 3\nI0111 17:06:47.126005 1536 log.go:181] (0x2e22000) (3) Data frame handling\nI0111 17:06:47.126365 1536 log.go:181] (0x2e221c0) (5) Data frame sent\nI0111 17:06:47.126516 1536 log.go:181] (0x29f59d0) Data frame received for 5\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0111 17:06:47.126635 1536 log.go:181] (0x2e221c0) (5) Data frame handling\nI0111 17:06:47.128631 1536 log.go:181] (0x29f59d0) (0x29f5a40) Stream removed, broadcasting: 1\nI0111 17:06:47.130645 1536 log.go:181] (0x29f59d0) Go away received\nI0111 17:06:47.133442 1536 log.go:181] (0x29f59d0) (0x29f5a40) Stream removed, broadcasting: 1\nI0111 17:06:47.133665 1536 log.go:181] (0x29f59d0) (0x2e22000) Stream removed, broadcasting: 3\nI0111 17:06:47.133838 1536 log.go:181] (0x29f59d0) (0x2e221c0) Stream removed, broadcasting: 5\n" Jan 11 17:06:47.143: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 11 17:06:47.143: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 11 17:06:57.184: INFO: Waiting for StatefulSet statefulset-9397/ss2 to complete update Jan 11 17:06:57.184: INFO: Waiting for Pod statefulset-9397/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jan 11 17:06:57.184: INFO: Waiting for Pod statefulset-9397/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jan 11 17:06:57.184: INFO: Waiting for Pod statefulset-9397/ss2-2 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jan 11 17:07:07.202: INFO: Waiting for StatefulSet statefulset-9397/ss2 to complete update Jan 11 17:07:07.202: INFO: Waiting for Pod statefulset-9397/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jan 11 17:07:07.202: INFO: Waiting for Pod statefulset-9397/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jan 11 17:07:07.202: INFO: Waiting for Pod statefulset-9397/ss2-2 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jan 11 17:07:17.201: INFO: Waiting for StatefulSet statefulset-9397/ss2 to complete update Jan 11 17:07:17.201: INFO: Waiting for Pod statefulset-9397/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jan 11 17:07:17.201: INFO: Waiting for Pod statefulset-9397/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jan 11 17:07:17.201: INFO: Waiting for Pod statefulset-9397/ss2-2 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jan 11 17:07:27.197: INFO: Waiting for StatefulSet statefulset-9397/ss2 to complete update Jan 11 17:07:27.198: INFO: Waiting for Pod statefulset-9397/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jan 11 17:07:27.198: INFO: Waiting for Pod 
statefulset-9397/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jan 11 17:07:27.198: INFO: Waiting for Pod statefulset-9397/ss2-2 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jan 11 17:07:37.201: INFO: Waiting for StatefulSet statefulset-9397/ss2 to complete update Jan 11 17:07:37.202: INFO: Waiting for Pod statefulset-9397/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jan 11 17:07:37.202: INFO: Waiting for Pod statefulset-9397/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jan 11 17:07:47.205: INFO: Waiting for StatefulSet statefulset-9397/ss2 to complete update Jan 11 17:07:47.206: INFO: Waiting for Pod statefulset-9397/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jan 11 17:07:47.206: INFO: Waiting for Pod statefulset-9397/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jan 11 17:07:57.203: INFO: Waiting for StatefulSet statefulset-9397/ss2 to complete update Jan 11 17:07:57.204: INFO: Waiting for Pod statefulset-9397/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jan 11 17:07:57.204: INFO: Waiting for Pod statefulset-9397/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jan 11 17:08:07.201: INFO: Waiting for StatefulSet statefulset-9397/ss2 to complete update Jan 11 17:08:07.201: INFO: Waiting for Pod statefulset-9397/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jan 11 17:08:07.201: INFO: Waiting for Pod statefulset-9397/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jan 11 17:08:17.202: INFO: Waiting for StatefulSet statefulset-9397/ss2 to complete update Jan 11 17:08:17.202: INFO: Waiting for Pod statefulset-9397/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jan 11 17:08:17.202: INFO: Waiting for Pod statefulset-9397/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jan 11 17:08:27.202: INFO: Waiting for StatefulSet statefulset-9397/ss2 to complete update Jan 11 17:08:27.202: INFO: Waiting for Pod statefulset-9397/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Jan 11 17:08:37.202: INFO: Deleting all statefulset in ns statefulset-9397 Jan 11 17:08:37.207: INFO: Scaling statefulset ss2 to 0 Jan 11 17:09:57.266: INFO: Waiting for statefulset status.replicas updated to 0 Jan 11 17:09:57.272: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 17:09:57.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9397" for this suite. 
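The rollout and rollback above are driven entirely by edits to the StatefulSet pod template: the controller replaces pods in reverse ordinal order until status.updateRevision matches status.currentRevision, and rolling back is just restoring the previous template. A minimal client-go sketch of that update-and-wait flow, assuming a reachable cluster at /root/.kube/config; the namespace, name and image mirror the log, but the code is illustrative rather than the suite's own helpers:

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	ssClient := client.AppsV1().StatefulSets("statefulset-9397")

	// Trigger the rolling update the same way the "Updating stateful set ss2"
	// step does: change the container image in the pod template.
	ss, err := ssClient.Get(ctx, "ss2", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	ss.Spec.Template.Spec.Containers[0].Image = "docker.io/library/httpd:2.4.39-alpine"
	if _, err := ssClient.Update(ctx, ss, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}

	// Poll until the controller reports a single revision again; a rollback is
	// the identical sequence with the previous image restored.
	for {
		ss, err = ssClient.Get(ctx, "ss2", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		if ss.Status.UpdateRevision == ss.Status.CurrentRevision &&
			ss.Status.UpdatedReplicas == *ss.Spec.Replicas {
			fmt.Println("rolling update complete at revision", ss.Status.CurrentRevision)
			return
		}
		time.Sleep(5 * time.Second)
	}
}

The repeated "Waiting for Pod ... to have revision ... update revision ..." lines above are exactly this wait: ss2-0 and ss2-1 still carry the old controller revision until the reverse-ordinal replacement reaches them.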
• [SLOW TEST:337.090 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should perform rolling updates and roll backs of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":309,"completed":136,"skipped":2402,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 17:09:57.306: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating pod liveness-17a0da70-6431-452c-923c-34a8a867dd1a in namespace container-probe-4260 Jan 11 17:10:01.465: INFO: Started pod liveness-17a0da70-6431-452c-923c-34a8a867dd1a in namespace container-probe-4260 STEP: checking the pod's current state and verifying that restartCount is present Jan 11 17:10:01.470: INFO: Initial restart count of pod liveness-17a0da70-6431-452c-923c-34a8a867dd1a is 0 Jan 11 17:10:15.545: INFO: Restart count of pod container-probe-4260/liveness-17a0da70-6431-452c-923c-34a8a867dd1a is now 1 (14.074983676s elapsed) Jan 11 17:10:35.642: INFO: Restart count of pod container-probe-4260/liveness-17a0da70-6431-452c-923c-34a8a867dd1a is now 2 (34.171796444s elapsed) Jan 11 17:10:55.723: INFO: Restart count of pod container-probe-4260/liveness-17a0da70-6431-452c-923c-34a8a867dd1a is now 3 (54.252559917s elapsed) Jan 11 17:11:15.798: INFO: Restart count of pod container-probe-4260/liveness-17a0da70-6431-452c-923c-34a8a867dd1a is now 4 (1m14.327486474s elapsed) Jan 11 17:12:18.322: INFO: Restart count of pod container-probe-4260/liveness-17a0da70-6431-452c-923c-34a8a867dd1a is now 5 (2m16.851563012s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 17:12:18.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4260" for this suite. 
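The monotonically increasing restart count comes from the kubelet restarting the container each time its liveness probe fails; the probe period plus the kubelet's exponential restart back-off explains the widening gap between restarts 4 and 5 above. A hypothetical pod spec with a deliberately failing exec probe, assuming the v1.20-era k8s.io/api packages (the suite's actual image and probe differ):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-example", Namespace: "container-probe-4260"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			Containers: []corev1.Container{{
				Name:    "liveness",
				Image:   "busybox",
				Command: []string{"sh", "-c", "sleep 3600"},
				LivenessProbe: &corev1.Probe{
					// The probe always fails, so restartCount climbs 1, 2, 3, ...
					// (k8s.io/api releases newer than the v1.20 used here rename
					// the embedded Handler field to ProbeHandler.)
					Handler: corev1.Handler{
						Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/missing"}},
					},
					InitialDelaySeconds: 5,
					PeriodSeconds:       5,
					FailureThreshold:    1,
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}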
• [SLOW TEST:141.071 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":309,"completed":137,"skipped":2438,"failed":0} SSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 17:12:18.379: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating configMap configmap-9986/configmap-test-5ceeb79d-7ac2-4dfb-968d-16d644347e31 STEP: Creating a pod to test consume configMaps Jan 11 17:12:18.817: INFO: Waiting up to 5m0s for pod "pod-configmaps-1ad2a0a2-a5fa-440b-85cd-9a425694ac2e" in namespace "configmap-9986" to be "Succeeded or Failed" Jan 11 17:12:18.840: INFO: Pod "pod-configmaps-1ad2a0a2-a5fa-440b-85cd-9a425694ac2e": Phase="Pending", Reason="", readiness=false. Elapsed: 23.675416ms Jan 11 17:12:20.876: INFO: Pod "pod-configmaps-1ad2a0a2-a5fa-440b-85cd-9a425694ac2e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058760067s Jan 11 17:12:22.966: INFO: Pod "pod-configmaps-1ad2a0a2-a5fa-440b-85cd-9a425694ac2e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.148762641s STEP: Saw pod success Jan 11 17:12:22.966: INFO: Pod "pod-configmaps-1ad2a0a2-a5fa-440b-85cd-9a425694ac2e" satisfied condition "Succeeded or Failed" Jan 11 17:12:22.984: INFO: Trying to get logs from node leguer-worker2 pod pod-configmaps-1ad2a0a2-a5fa-440b-85cd-9a425694ac2e container env-test: STEP: delete the pod Jan 11 17:12:23.191: INFO: Waiting for pod pod-configmaps-1ad2a0a2-a5fa-440b-85cd-9a425694ac2e to disappear Jan 11 17:12:23.204: INFO: Pod pod-configmaps-1ad2a0a2-a5fa-440b-85cd-9a425694ac2e no longer exists [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 17:12:23.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9986" for this suite. 
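Consuming a ConfigMap "via the environment" means the kubelet resolves the referenced keys into environment variables before the container starts, so a one-shot pod that simply runs env (and then exits Succeeded, as above) is enough to verify the values. A sketch of the two objects involved, with hypothetical names and data; the suite generates its own UUID-suffixed names:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	cm := corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test"},
		Data:       map[string]string{"data-1": "value-1"},
	}

	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-env"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever, // run once, end up Succeeded or Failed
			Containers: []corev1.Container{{
				Name:    "env-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{{
					// A single key pulled out of the ConfigMap by name.
					Name: "CONFIG_DATA_1",
					ValueFrom: &corev1.EnvVarSource{
						ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
							LocalObjectReference: corev1.LocalObjectReference{Name: cm.Name},
							Key:                  "data-1",
						},
					},
				}},
				// Alternatively, EnvFrom imports every key in the ConfigMap at once.
				EnvFrom: []corev1.EnvFromSource{{
					ConfigMapRef: &corev1.ConfigMapEnvSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: cm.Name},
					},
				}},
			}},
		},
	}

	for _, obj := range []interface{}{cm, pod} {
		out, _ := json.MarshalIndent(obj, "", "  ")
		fmt.Println(string(out))
	}
}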
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":309,"completed":138,"skipped":2449,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 17:12:23.223: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test override all Jan 11 17:12:23.328: INFO: Waiting up to 5m0s for pod "client-containers-85ed6fd4-cbd5-4cd8-8a11-080c2e881bed" in namespace "containers-5281" to be "Succeeded or Failed" Jan 11 17:12:23.392: INFO: Pod "client-containers-85ed6fd4-cbd5-4cd8-8a11-080c2e881bed": Phase="Pending", Reason="", readiness=false. Elapsed: 63.976641ms Jan 11 17:12:25.401: INFO: Pod "client-containers-85ed6fd4-cbd5-4cd8-8a11-080c2e881bed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073416074s Jan 11 17:12:27.411: INFO: Pod "client-containers-85ed6fd4-cbd5-4cd8-8a11-080c2e881bed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.082950322s STEP: Saw pod success Jan 11 17:12:27.411: INFO: Pod "client-containers-85ed6fd4-cbd5-4cd8-8a11-080c2e881bed" satisfied condition "Succeeded or Failed" Jan 11 17:12:27.417: INFO: Trying to get logs from node leguer-worker pod client-containers-85ed6fd4-cbd5-4cd8-8a11-080c2e881bed container agnhost-container: STEP: delete the pod Jan 11 17:12:27.482: INFO: Waiting for pod client-containers-85ed6fd4-cbd5-4cd8-8a11-080c2e881bed to disappear Jan 11 17:12:27.489: INFO: Pod client-containers-85ed6fd4-cbd5-4cd8-8a11-080c2e881bed no longer exists [AfterEach] [k8s.io] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 17:12:27.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-5281" for this suite. 
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":309,"completed":139,"skipped":2471,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 17:12:27.519: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test override arguments Jan 11 17:12:27.603: INFO: Waiting up to 5m0s for pod "client-containers-d88ff3df-59e2-4db9-8a6b-bb0f627c25ce" in namespace "containers-9575" to be "Succeeded or Failed" Jan 11 17:12:27.610: INFO: Pod "client-containers-d88ff3df-59e2-4db9-8a6b-bb0f627c25ce": Phase="Pending", Reason="", readiness=false. Elapsed: 7.040773ms Jan 11 17:12:29.620: INFO: Pod "client-containers-d88ff3df-59e2-4db9-8a6b-bb0f627c25ce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016876558s Jan 11 17:12:31.632: INFO: Pod "client-containers-d88ff3df-59e2-4db9-8a6b-bb0f627c25ce": Phase="Running", Reason="", readiness=true. Elapsed: 4.028863729s Jan 11 17:12:33.640: INFO: Pod "client-containers-d88ff3df-59e2-4db9-8a6b-bb0f627c25ce": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.036700124s STEP: Saw pod success Jan 11 17:12:33.640: INFO: Pod "client-containers-d88ff3df-59e2-4db9-8a6b-bb0f627c25ce" satisfied condition "Succeeded or Failed" Jan 11 17:12:33.646: INFO: Trying to get logs from node leguer-worker2 pod client-containers-d88ff3df-59e2-4db9-8a6b-bb0f627c25ce container agnhost-container: STEP: delete the pod Jan 11 17:12:33.682: INFO: Waiting for pod client-containers-d88ff3df-59e2-4db9-8a6b-bb0f627c25ce to disappear Jan 11 17:12:33.707: INFO: Pod client-containers-d88ff3df-59e2-4db9-8a6b-bb0f627c25ce no longer exists [AfterEach] [k8s.io] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 17:12:33.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-9575" for this suite. 
• [SLOW TEST:6.202 seconds] [k8s.io] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":309,"completed":140,"skipped":2484,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 17:12:33.724: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating secret with name secret-test-4cc43d1e-c4f3-473e-8d19-daf134b275c5 STEP: Creating a pod to test consume secrets Jan 11 17:12:33.835: INFO: Waiting up to 5m0s for pod "pod-secrets-5dd8501c-d84c-4116-919e-88aa19cce72b" in namespace "secrets-5293" to be "Succeeded or Failed" Jan 11 17:12:33.853: INFO: Pod "pod-secrets-5dd8501c-d84c-4116-919e-88aa19cce72b": Phase="Pending", Reason="", readiness=false. Elapsed: 17.216061ms Jan 11 17:12:35.860: INFO: Pod "pod-secrets-5dd8501c-d84c-4116-919e-88aa19cce72b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024871163s Jan 11 17:12:37.868: INFO: Pod "pod-secrets-5dd8501c-d84c-4116-919e-88aa19cce72b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03249923s STEP: Saw pod success Jan 11 17:12:37.868: INFO: Pod "pod-secrets-5dd8501c-d84c-4116-919e-88aa19cce72b" satisfied condition "Succeeded or Failed" Jan 11 17:12:37.873: INFO: Trying to get logs from node leguer-worker pod pod-secrets-5dd8501c-d84c-4116-919e-88aa19cce72b container secret-volume-test: STEP: delete the pod Jan 11 17:12:38.639: INFO: Waiting for pod pod-secrets-5dd8501c-d84c-4116-919e-88aa19cce72b to disappear Jan 11 17:12:38.660: INFO: Pod pod-secrets-5dd8501c-d84c-4116-919e-88aa19cce72b no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 17:12:38.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5293" for this suite. 
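The secret test mounts the secret as a volume with a non-default file mode and runs the pod as a non-root UID with an fsGroup, so it can assert both the mode bits and the group ownership of the projected files. A hedged sketch of the relevant spec pieces, with hypothetical names, IDs and mode:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	var (
		mode    int32 = 0440 // defaultMode applied to files projected from the secret
		uid     int64 = 1000 // non-root user the container runs as
		fsGroup int64 = 2000 // supplementary group applied to the volume
	)

	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser: &uid,
				FSGroup:   &fsGroup,
			},
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName:  "secret-test",
						DefaultMode: &mode,
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "ls -ln /etc/secret-volume"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "secret-volume",
					MountPath: "/etc/secret-volume",
					ReadOnly:  true,
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}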
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":141,"skipped":2507,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 17:12:38.706: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:92 Jan 11 17:12:38.847: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jan 11 17:12:38.910: INFO: Waiting for terminating namespaces to be deleted... Jan 11 17:12:38.916: INFO: Logging pods the apiserver thinks is on node leguer-worker before test Jan 11 17:12:38.935: INFO: rally-a8f48c6d-3kmika18-pdtzv from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 11 17:12:38.935: INFO: Container rally-a8f48c6d-3kmika18 ready: true, restart count 0 Jan 11 17:12:38.935: INFO: rally-a8f48c6d-3kmika18-pllzg from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 11 17:12:38.935: INFO: Container rally-a8f48c6d-3kmika18 ready: true, restart count 0 Jan 11 17:12:38.935: INFO: rally-a8f48c6d-4cyi45kq-j5tzz from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 11 17:12:38.935: INFO: Container rally-a8f48c6d-4cyi45kq ready: true, restart count 0 Jan 11 17:12:38.935: INFO: rally-a8f48c6d-f3hls6a3-57dwc from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:32 +0000 UTC (1 container statuses recorded) Jan 11 17:12:38.935: INFO: Container rally-a8f48c6d-f3hls6a3 ready: true, restart count 0 Jan 11 17:12:38.935: INFO: rally-a8f48c6d-1y3amfc0-lp8st from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:32 +0000 UTC (1 container statuses recorded) Jan 11 17:12:38.935: INFO: Container rally-a8f48c6d-1y3amfc0 ready: true, restart count 0 Jan 11 17:12:38.935: INFO: rally-a8f48c6d-9pqmjehi-9zwjj from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 11 17:12:38.935: INFO: Container rally-a8f48c6d-9pqmjehi ready: true, restart count 0 Jan 11 17:12:38.935: INFO: chaos-controller-manager-69c479c674-s796v from default started at 2021-01-10 20:58:24 +0000 UTC (1 container statuses recorded) Jan 11 17:12:38.935: INFO: Container chaos-mesh ready: true, restart count 0 Jan 11 17:12:38.935: INFO: chaos-daemon-lv692 from default started at 2021-01-10 20:58:25 +0000 UTC (1 container statuses recorded) Jan 11 17:12:38.936: INFO: Container chaos-daemon ready: true, restart count 0 Jan 11 17:12:38.936: INFO: kindnet-psm25 from kube-system started at 2021-01-10 17:38:10 +0000 UTC (1 container statuses recorded) Jan 11 17:12:38.936: INFO: Container kindnet-cni ready: true, restart count 0 Jan 11 
17:12:38.936: INFO: kube-proxy-bmbcs from kube-system started at 2021-01-10 17:38:10 +0000 UTC (1 container statuses recorded) Jan 11 17:12:38.936: INFO: Container kube-proxy ready: true, restart count 0 Jan 11 17:12:38.936: INFO: Logging pods the apiserver thinks is on node leguer-worker2 before test Jan 11 17:12:38.967: INFO: rally-a8f48c6d-4cyi45kq-knr4r from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 11 17:12:38.967: INFO: Container rally-a8f48c6d-4cyi45kq ready: true, restart count 0 Jan 11 17:12:38.967: INFO: rally-a8f48c6d-f3hls6a3-dwt8n from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:32 +0000 UTC (1 container statuses recorded) Jan 11 17:12:38.967: INFO: Container rally-a8f48c6d-f3hls6a3 ready: true, restart count 0 Jan 11 17:12:38.967: INFO: rally-a8f48c6d-1y3amfc0-hh9qk from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:32 +0000 UTC (1 container statuses recorded) Jan 11 17:12:38.967: INFO: Container rally-a8f48c6d-1y3amfc0 ready: true, restart count 0 Jan 11 17:12:38.967: INFO: rally-a8f48c6d-9pqmjehi-85slb from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 11 17:12:38.967: INFO: Container rally-a8f48c6d-9pqmjehi ready: true, restart count 0 Jan 11 17:12:38.967: INFO: rally-a8f48c6d-vnukxqu0-llj24 from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 11 17:12:38.967: INFO: Container rally-a8f48c6d-vnukxqu0 ready: true, restart count 0 Jan 11 17:12:38.967: INFO: rally-a8f48c6d-vnukxqu0-v85kr from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 11 17:12:38.967: INFO: Container rally-a8f48c6d-vnukxqu0 ready: true, restart count 0 Jan 11 17:12:38.967: INFO: chaos-daemon-ffkg7 from default started at 2021-01-10 20:58:25 +0000 UTC (1 container statuses recorded) Jan 11 17:12:38.967: INFO: Container chaos-daemon ready: true, restart count 0 Jan 11 17:12:38.967: INFO: kindnet-8wggd from kube-system started at 2021-01-10 17:38:10 +0000 UTC (1 container statuses recorded) Jan 11 17:12:38.967: INFO: Container kindnet-cni ready: true, restart count 0 Jan 11 17:12:38.967: INFO: kube-proxy-29gxg from kube-system started at 2021-01-10 17:38:09 +0000 UTC (1 container statuses recorded) Jan 11 17:12:38.968: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
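The next steps create three pods that all request hostPort 54321 on the same node yet remain schedulable together, because the scheduler only treats host ports as conflicting when hostPort, hostIP and protocol all match. A sketch of the three port stanzas involved, using the hostIP 172.18.0.12 seen later in the connectivity checks; the container ports themselves are placeholders:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	ports := map[string]corev1.ContainerPort{
		// pod1: TCP on the loopback hostIP.
		"pod1": {ContainerPort: 8080, HostPort: 54321, HostIP: "127.0.0.1", Protocol: corev1.ProtocolTCP},
		// pod2: same hostPort and protocol, different hostIP, so no conflict with pod1.
		"pod2": {ContainerPort: 8080, HostPort: 54321, HostIP: "172.18.0.12", Protocol: corev1.ProtocolTCP},
		// pod3: same hostPort and hostIP as pod2 but UDP, so still no conflict.
		"pod3": {ContainerPort: 8080, HostPort: 54321, HostIP: "172.18.0.12", Protocol: corev1.ProtocolUDP},
	}
	out, _ := json.MarshalIndent(ports, "", "  ")
	fmt.Println(string(out))
}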
STEP: verifying the node has the label kubernetes.io/e2e-45f1906e-0c97-4b15-99b4-2197e92a3a3c 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 172.18.0.12 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 172.18.0.12 but use UDP protocol on the node which pod2 resides STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54321 Jan 11 17:12:59.213: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 172.18.0.12 http://127.0.0.1:54321/hostname] Namespace:sched-pred-8925 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 11 17:12:59.213: INFO: >>> kubeConfig: /root/.kube/config I0111 17:12:59.337073 10 log.go:181] (0x8f30d20) (0x8f30fc0) Create stream I0111 17:12:59.337320 10 log.go:181] (0x8f30d20) (0x8f30fc0) Stream added, broadcasting: 1 I0111 17:12:59.342538 10 log.go:181] (0x8f30d20) Reply frame received for 1 I0111 17:12:59.342736 10 log.go:181] (0x8f30d20) (0xa0952d0) Create stream I0111 17:12:59.342834 10 log.go:181] (0x8f30d20) (0xa0952d0) Stream added, broadcasting: 3 I0111 17:12:59.344177 10 log.go:181] (0x8f30d20) Reply frame received for 3 I0111 17:12:59.344322 10 log.go:181] (0x8f30d20) (0xc0fa000) Create stream I0111 17:12:59.344384 10 log.go:181] (0x8f30d20) (0xc0fa000) Stream added, broadcasting: 5 I0111 17:12:59.345609 10 log.go:181] (0x8f30d20) Reply frame received for 5 I0111 17:12:59.428295 10 log.go:181] (0x8f30d20) Data frame received for 5 I0111 17:12:59.428546 10 log.go:181] (0xc0fa000) (5) Data frame handling I0111 17:12:59.428807 10 log.go:181] (0xc0fa000) (5) Data frame sent I0111 17:12:59.429170 10 log.go:181] (0x8f30d20) Data frame received for 3 I0111 17:12:59.429369 10 log.go:181] (0xa0952d0) (3) Data frame handling I0111 17:12:59.429563 10 log.go:181] (0xa0952d0) (3) Data frame sent I0111 17:12:59.429747 10 log.go:181] (0x8f30d20) Data frame received for 5 I0111 17:12:59.429948 10 log.go:181] (0xc0fa000) (5) Data frame handling I0111 17:12:59.430121 10 log.go:181] (0x8f30d20) Data frame received for 3 I0111 17:12:59.430245 10 log.go:181] (0xa0952d0) (3) Data frame handling I0111 17:12:59.430404 10 log.go:181] (0xc0fa000) (5) Data frame sent I0111 17:12:59.430699 10 log.go:181] (0x8f30d20) Data frame received for 5 I0111 17:12:59.430884 10 log.go:181] (0xc0fa000) (5) Data frame handling I0111 17:12:59.431073 10 log.go:181] (0xc0fa000) (5) Data frame sent I0111 17:12:59.431212 10 log.go:181] (0x8f30d20) Data frame received for 5 I0111 17:12:59.431315 10 log.go:181] (0xc0fa000) (5) Data frame handling I0111 17:12:59.431445 10 log.go:181] (0x8f30d20) Data frame received for 1 I0111 17:12:59.431588 10 log.go:181] (0x8f30fc0) (1) Data frame handling I0111 17:12:59.431716 10 log.go:181] (0x8f30fc0) (1) Data frame sent I0111 17:12:59.431866 10 log.go:181] (0x8f30d20) (0x8f30fc0) Stream removed, broadcasting: 1 I0111 17:12:59.432057 10 log.go:181] (0xc0fa000) (5) Data frame sent I0111 17:12:59.432198 10 log.go:181] (0x8f30d20) Data frame received for 5 I0111 17:12:59.432308 10 log.go:181] (0xc0fa000) (5) Data frame handling I0111 17:12:59.432438 10 log.go:181] (0xc0fa000) (5) Data frame sent I0111 17:12:59.432545 10 log.go:181] (0x8f30d20) Data frame received for 5 I0111 17:12:59.432680 10 log.go:181] (0xc0fa000) (5) Data frame handling 
I0111 17:12:59.432911 10 log.go:181] (0x8f30d20) Go away received I0111 17:12:59.433019 10 log.go:181] (0x8f30d20) (0x8f30fc0) Stream removed, broadcasting: 1 I0111 17:12:59.433171 10 log.go:181] (0x8f30d20) (0xa0952d0) Stream removed, broadcasting: 3 I0111 17:12:59.433313 10 log.go:181] (0x8f30d20) (0xc0fa000) Stream removed, broadcasting: 5 STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.18.0.12, port: 54321 Jan 11 17:12:59.433: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 http://172.18.0.12:54321/hostname] Namespace:sched-pred-8925 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 11 17:12:59.433: INFO: >>> kubeConfig: /root/.kube/config I0111 17:12:59.539317 10 log.go:181] (0xac182a0) (0xac18310) Create stream I0111 17:12:59.539465 10 log.go:181] (0xac182a0) (0xac18310) Stream added, broadcasting: 1 I0111 17:12:59.543158 10 log.go:181] (0xac182a0) Reply frame received for 1 I0111 17:12:59.543355 10 log.go:181] (0xac182a0) (0xacfe070) Create stream I0111 17:12:59.543475 10 log.go:181] (0xac182a0) (0xacfe070) Stream added, broadcasting: 3 I0111 17:12:59.544884 10 log.go:181] (0xac182a0) Reply frame received for 3 I0111 17:12:59.544986 10 log.go:181] (0xac182a0) (0xac184d0) Create stream I0111 17:12:59.545039 10 log.go:181] (0xac182a0) (0xac184d0) Stream added, broadcasting: 5 I0111 17:12:59.546494 10 log.go:181] (0xac182a0) Reply frame received for 5 I0111 17:12:59.622773 10 log.go:181] (0xac182a0) Data frame received for 5 I0111 17:12:59.623027 10 log.go:181] (0xac184d0) (5) Data frame handling I0111 17:12:59.623241 10 log.go:181] (0xac184d0) (5) Data frame sent I0111 17:12:59.623379 10 log.go:181] (0xac182a0) Data frame received for 5 I0111 17:12:59.623535 10 log.go:181] (0xac182a0) Data frame received for 3 I0111 17:12:59.623734 10 log.go:181] (0xacfe070) (3) Data frame handling I0111 17:12:59.623994 10 log.go:181] (0xac184d0) (5) Data frame handling I0111 17:12:59.624267 10 log.go:181] (0xacfe070) (3) Data frame sent I0111 17:12:59.624533 10 log.go:181] (0xac182a0) Data frame received for 3 I0111 17:12:59.624814 10 log.go:181] (0xacfe070) (3) Data frame handling I0111 17:12:59.625185 10 log.go:181] (0xac184d0) (5) Data frame sent I0111 17:12:59.625385 10 log.go:181] (0xac182a0) Data frame received for 5 I0111 17:12:59.625492 10 log.go:181] (0xac184d0) (5) Data frame handling I0111 17:12:59.625609 10 log.go:181] (0xac184d0) (5) Data frame sent I0111 17:12:59.625725 10 log.go:181] (0xac182a0) Data frame received for 5 I0111 17:12:59.625826 10 log.go:181] (0xac182a0) Data frame received for 1 I0111 17:12:59.625986 10 log.go:181] (0xac18310) (1) Data frame handling I0111 17:12:59.626104 10 log.go:181] (0xac18310) (1) Data frame sent I0111 17:12:59.626241 10 log.go:181] (0xac182a0) (0xac18310) Stream removed, broadcasting: 1 I0111 17:12:59.626388 10 log.go:181] (0xac184d0) (5) Data frame handling I0111 17:12:59.626668 10 log.go:181] (0xac184d0) (5) Data frame sent I0111 17:12:59.626878 10 log.go:181] (0xac182a0) Data frame received for 5 I0111 17:12:59.626995 10 log.go:181] (0xac184d0) (5) Data frame handling I0111 17:12:59.627195 10 log.go:181] (0xac184d0) (5) Data frame sent I0111 17:12:59.627380 10 log.go:181] (0xac182a0) Data frame received for 5 I0111 17:12:59.627499 10 log.go:181] (0xac184d0) (5) Data frame handling I0111 17:12:59.627667 10 log.go:181] (0xac184d0) (5) Data frame sent I0111 17:12:59.627827 10 log.go:181] (0xac182a0) Data frame received 
for 5 I0111 17:12:59.628006 10 log.go:181] (0xac184d0) (5) Data frame handling I0111 17:12:59.628261 10 log.go:181] (0xac182a0) Go away received I0111 17:12:59.628464 10 log.go:181] (0xac182a0) (0xac18310) Stream removed, broadcasting: 1 I0111 17:12:59.628783 10 log.go:181] (0xac182a0) (0xacfe070) Stream removed, broadcasting: 3 I0111 17:12:59.629026 10 log.go:181] (0xac182a0) (0xac184d0) Stream removed, broadcasting: 5 STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.18.0.12, port: 54321 UDP Jan 11 17:12:59.629: INFO: ExecWithOptions {Command:[/bin/sh -c nc -vuz -w 5 172.18.0.12 54321] Namespace:sched-pred-8925 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 11 17:12:59.629: INFO: >>> kubeConfig: /root/.kube/config I0111 17:12:59.729235 10 log.go:181] (0x965ff10) (0x965ff80) Create stream I0111 17:12:59.729376 10 log.go:181] (0x965ff10) (0x965ff80) Stream added, broadcasting: 1 I0111 17:12:59.734251 10 log.go:181] (0x965ff10) Reply frame received for 1 I0111 17:12:59.734420 10 log.go:181] (0x965ff10) (0xae56380) Create stream I0111 17:12:59.734485 10 log.go:181] (0x965ff10) (0xae56380) Stream added, broadcasting: 3 I0111 17:12:59.736346 10 log.go:181] (0x965ff10) Reply frame received for 3 I0111 17:12:59.736519 10 log.go:181] (0x965ff10) (0x7b7a150) Create stream I0111 17:12:59.736610 10 log.go:181] (0x965ff10) (0x7b7a150) Stream added, broadcasting: 5 I0111 17:12:59.738300 10 log.go:181] (0x965ff10) Reply frame received for 5 I0111 17:13:04.800447 10 log.go:181] (0x965ff10) Data frame received for 5 I0111 17:13:04.800696 10 log.go:181] (0x7b7a150) (5) Data frame handling I0111 17:13:04.801041 10 log.go:181] (0x7b7a150) (5) Data frame sent I0111 17:13:04.801613 10 log.go:181] (0x965ff10) Data frame received for 5 I0111 17:13:04.801793 10 log.go:181] (0x7b7a150) (5) Data frame handling I0111 17:13:04.802003 10 log.go:181] (0x965ff10) Data frame received for 3 I0111 17:13:04.802168 10 log.go:181] (0xae56380) (3) Data frame handling I0111 17:13:04.802528 10 log.go:181] (0x965ff10) Data frame received for 1 I0111 17:13:04.802733 10 log.go:181] (0x965ff80) (1) Data frame handling I0111 17:13:04.802887 10 log.go:181] (0x965ff80) (1) Data frame sent I0111 17:13:04.803076 10 log.go:181] (0x965ff10) (0x965ff80) Stream removed, broadcasting: 1 I0111 17:13:04.803296 10 log.go:181] (0x965ff10) Go away received I0111 17:13:04.803839 10 log.go:181] (0x965ff10) (0x965ff80) Stream removed, broadcasting: 1 I0111 17:13:04.804043 10 log.go:181] (0x965ff10) (0xae56380) Stream removed, broadcasting: 3 I0111 17:13:04.804213 10 log.go:181] (0x965ff10) (0x7b7a150) Stream removed, broadcasting: 5 STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54321 Jan 11 17:13:04.804: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 172.18.0.12 http://127.0.0.1:54321/hostname] Namespace:sched-pred-8925 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 11 17:13:04.804: INFO: >>> kubeConfig: /root/.kube/config I0111 17:13:04.909877 10 log.go:181] (0xa99e4d0) (0xa99e540) Create stream I0111 17:13:04.910048 10 log.go:181] (0xa99e4d0) (0xa99e540) Stream added, broadcasting: 1 I0111 17:13:04.914218 10 log.go:181] (0xa99e4d0) Reply frame received for 1 I0111 17:13:04.914493 10 log.go:181] (0xa99e4d0) (0xa99e700) Create stream I0111 17:13:04.914616 10 log.go:181] (0xa99e4d0) 
(0xa99e700) Stream added, broadcasting: 3 I0111 17:13:04.916609 10 log.go:181] (0xa99e4d0) Reply frame received for 3 I0111 17:13:04.916941 10 log.go:181] (0xa99e4d0) (0xa99e8c0) Create stream I0111 17:13:04.917092 10 log.go:181] (0xa99e4d0) (0xa99e8c0) Stream added, broadcasting: 5 I0111 17:13:04.918556 10 log.go:181] (0xa99e4d0) Reply frame received for 5 I0111 17:13:05.016647 10 log.go:181] (0xa99e4d0) Data frame received for 5 I0111 17:13:05.016898 10 log.go:181] (0xa99e8c0) (5) Data frame handling I0111 17:13:05.017054 10 log.go:181] (0xa99e8c0) (5) Data frame sent I0111 17:13:05.017176 10 log.go:181] (0xa99e4d0) Data frame received for 5 I0111 17:13:05.017290 10 log.go:181] (0xa99e8c0) (5) Data frame handling I0111 17:13:05.017453 10 log.go:181] (0xa99e8c0) (5) Data frame sent I0111 17:13:05.017637 10 log.go:181] (0xa99e4d0) Data frame received for 5 I0111 17:13:05.017870 10 log.go:181] (0xa99e8c0) (5) Data frame handling I0111 17:13:05.018017 10 log.go:181] (0xa99e8c0) (5) Data frame sent I0111 17:13:05.018201 10 log.go:181] (0xa99e4d0) Data frame received for 5 I0111 17:13:05.018414 10 log.go:181] (0xa99e8c0) (5) Data frame handling I0111 17:13:05.018568 10 log.go:181] (0xa99e8c0) (5) Data frame sent I0111 17:13:05.018660 10 log.go:181] (0xa99e4d0) Data frame received for 3 I0111 17:13:05.018743 10 log.go:181] (0xa99e700) (3) Data frame handling I0111 17:13:05.018852 10 log.go:181] (0xa99e700) (3) Data frame sent I0111 17:13:05.018949 10 log.go:181] (0xa99e4d0) Data frame received for 3 I0111 17:13:05.019063 10 log.go:181] (0xa99e4d0) Data frame received for 5 I0111 17:13:05.019231 10 log.go:181] (0xa99e8c0) (5) Data frame handling I0111 17:13:05.019355 10 log.go:181] (0xa99e700) (3) Data frame handling I0111 17:13:05.019542 10 log.go:181] (0xa99e8c0) (5) Data frame sent I0111 17:13:05.019732 10 log.go:181] (0xa99e4d0) Data frame received for 5 I0111 17:13:05.019905 10 log.go:181] (0xa99e8c0) (5) Data frame handling I0111 17:13:05.020113 10 log.go:181] (0xa99e4d0) Data frame received for 1 I0111 17:13:05.020316 10 log.go:181] (0xa99e540) (1) Data frame handling I0111 17:13:05.020483 10 log.go:181] (0xa99e540) (1) Data frame sent I0111 17:13:05.020715 10 log.go:181] (0xa99e4d0) (0xa99e540) Stream removed, broadcasting: 1 I0111 17:13:05.021004 10 log.go:181] (0xa99e8c0) (5) Data frame sent I0111 17:13:05.021156 10 log.go:181] (0xa99e4d0) Data frame received for 5 I0111 17:13:05.021281 10 log.go:181] (0xa99e8c0) (5) Data frame handling I0111 17:13:05.021395 10 log.go:181] (0xa99e8c0) (5) Data frame sent I0111 17:13:05.021499 10 log.go:181] (0xa99e4d0) Data frame received for 5 I0111 17:13:05.021597 10 log.go:181] (0xa99e8c0) (5) Data frame handling I0111 17:13:05.021715 10 log.go:181] (0xa99e8c0) (5) Data frame sent I0111 17:13:05.021818 10 log.go:181] (0xa99e4d0) Data frame received for 5 I0111 17:13:05.021914 10 log.go:181] (0xa99e8c0) (5) Data frame handling I0111 17:13:05.022049 10 log.go:181] (0xa99e4d0) Go away received I0111 17:13:05.022163 10 log.go:181] (0xa99e4d0) (0xa99e540) Stream removed, broadcasting: 1 I0111 17:13:05.022305 10 log.go:181] (0xa99e4d0) (0xa99e700) Stream removed, broadcasting: 3 I0111 17:13:05.022536 10 log.go:181] (0xa99e4d0) (0xa99e8c0) Stream removed, broadcasting: 5 STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.18.0.12, port: 54321 Jan 11 17:13:05.022: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 http://172.18.0.12:54321/hostname] Namespace:sched-pred-8925 PodName:e2e-host-exec 
ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 11 17:13:05.022: INFO: >>> kubeConfig: /root/.kube/config I0111 17:13:05.131230 10 log.go:181] (0xc0fb0a0) (0xc0fb110) Create stream I0111 17:13:05.131408 10 log.go:181] (0xc0fb0a0) (0xc0fb110) Stream added, broadcasting: 1 I0111 17:13:05.135616 10 log.go:181] (0xc0fb0a0) Reply frame received for 1 I0111 17:13:05.135819 10 log.go:181] (0xc0fb0a0) (0xc0fb2d0) Create stream I0111 17:13:05.135899 10 log.go:181] (0xc0fb0a0) (0xc0fb2d0) Stream added, broadcasting: 3 I0111 17:13:05.137380 10 log.go:181] (0xc0fb0a0) Reply frame received for 3 I0111 17:13:05.137542 10 log.go:181] (0xc0fb0a0) (0x7b7a930) Create stream I0111 17:13:05.137623 10 log.go:181] (0xc0fb0a0) (0x7b7a930) Stream added, broadcasting: 5 I0111 17:13:05.138979 10 log.go:181] (0xc0fb0a0) Reply frame received for 5 I0111 17:13:05.196400 10 log.go:181] (0xc0fb0a0) Data frame received for 5 I0111 17:13:05.196669 10 log.go:181] (0xc0fb0a0) Data frame received for 3 I0111 17:13:05.197021 10 log.go:181] (0x7b7a930) (5) Data frame handling I0111 17:13:05.197132 10 log.go:181] (0x7b7a930) (5) Data frame sent I0111 17:13:05.197231 10 log.go:181] (0xc0fb2d0) (3) Data frame handling I0111 17:13:05.197478 10 log.go:181] (0xc0fb0a0) Data frame received for 5 I0111 17:13:05.197623 10 log.go:181] (0x7b7a930) (5) Data frame handling I0111 17:13:05.197782 10 log.go:181] (0xc0fb2d0) (3) Data frame sent I0111 17:13:05.197976 10 log.go:181] (0xc0fb0a0) Data frame received for 3 I0111 17:13:05.198131 10 log.go:181] (0xc0fb2d0) (3) Data frame handling I0111 17:13:05.198331 10 log.go:181] (0x7b7a930) (5) Data frame sent I0111 17:13:05.198464 10 log.go:181] (0xc0fb0a0) Data frame received for 5 I0111 17:13:05.198599 10 log.go:181] (0x7b7a930) (5) Data frame handling I0111 17:13:05.198743 10 log.go:181] (0x7b7a930) (5) Data frame sent I0111 17:13:05.198859 10 log.go:181] (0xc0fb0a0) Data frame received for 5 I0111 17:13:05.198978 10 log.go:181] (0x7b7a930) (5) Data frame handling I0111 17:13:05.199192 10 log.go:181] (0x7b7a930) (5) Data frame sent I0111 17:13:05.199353 10 log.go:181] (0xc0fb0a0) Data frame received for 5 I0111 17:13:05.199566 10 log.go:181] (0x7b7a930) (5) Data frame handling I0111 17:13:05.202320 10 log.go:181] (0xc0fb0a0) Data frame received for 1 I0111 17:13:05.202470 10 log.go:181] (0xc0fb110) (1) Data frame handling I0111 17:13:05.202670 10 log.go:181] (0xc0fb110) (1) Data frame sent I0111 17:13:05.202793 10 log.go:181] (0xc0fb0a0) (0xc0fb110) Stream removed, broadcasting: 1 I0111 17:13:05.202926 10 log.go:181] (0xc0fb0a0) Go away received I0111 17:13:05.203153 10 log.go:181] (0xc0fb0a0) (0xc0fb110) Stream removed, broadcasting: 1 I0111 17:13:05.203245 10 log.go:181] (0xc0fb0a0) (0xc0fb2d0) Stream removed, broadcasting: 3 I0111 17:13:05.203355 10 log.go:181] (0xc0fb0a0) (0x7b7a930) Stream removed, broadcasting: 5 STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.18.0.12, port: 54321 UDP Jan 11 17:13:05.203: INFO: ExecWithOptions {Command:[/bin/sh -c nc -vuz -w 5 172.18.0.12 54321] Namespace:sched-pred-8925 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 11 17:13:05.203: INFO: >>> kubeConfig: /root/.kube/config I0111 17:13:05.306064 10 log.go:181] (0xae56ee0) (0xae56f50) Create stream I0111 17:13:05.306173 10 log.go:181] (0xae56ee0) (0xae56f50) Stream added, broadcasting: 1 I0111 17:13:05.310055 10 log.go:181] 
(0xae56ee0) Reply frame received for 1 I0111 17:13:05.310161 10 log.go:181] (0xae56ee0) (0xa99eee0) Create stream I0111 17:13:05.310228 10 log.go:181] (0xae56ee0) (0xa99eee0) Stream added, broadcasting: 3 I0111 17:13:05.311295 10 log.go:181] (0xae56ee0) Reply frame received for 3 I0111 17:13:05.311439 10 log.go:181] (0xae56ee0) (0xae57180) Create stream I0111 17:13:05.311523 10 log.go:181] (0xae56ee0) (0xae57180) Stream added, broadcasting: 5 I0111 17:13:05.312709 10 log.go:181] (0xae56ee0) Reply frame received for 5 I0111 17:13:10.369336 10 log.go:181] (0xae56ee0) Data frame received for 5 I0111 17:13:10.369615 10 log.go:181] (0xae57180) (5) Data frame handling I0111 17:13:10.369786 10 log.go:181] (0xae56ee0) Data frame received for 3 I0111 17:13:10.370024 10 log.go:181] (0xa99eee0) (3) Data frame handling I0111 17:13:10.370224 10 log.go:181] (0xae57180) (5) Data frame sent I0111 17:13:10.370372 10 log.go:181] (0xae56ee0) Data frame received for 5 I0111 17:13:10.370537 10 log.go:181] (0xae57180) (5) Data frame handling I0111 17:13:10.371679 10 log.go:181] (0xae56ee0) Data frame received for 1 I0111 17:13:10.371917 10 log.go:181] (0xae56f50) (1) Data frame handling I0111 17:13:10.372128 10 log.go:181] (0xae56f50) (1) Data frame sent I0111 17:13:10.372353 10 log.go:181] (0xae56ee0) (0xae56f50) Stream removed, broadcasting: 1 I0111 17:13:10.372608 10 log.go:181] (0xae56ee0) Go away received I0111 17:13:10.373173 10 log.go:181] (0xae56ee0) (0xae56f50) Stream removed, broadcasting: 1 I0111 17:13:10.373342 10 log.go:181] (0xae56ee0) (0xa99eee0) Stream removed, broadcasting: 3 I0111 17:13:10.373469 10 log.go:181] (0xae56ee0) (0xae57180) Stream removed, broadcasting: 5 STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54321 Jan 11 17:13:10.373: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 172.18.0.12 http://127.0.0.1:54321/hostname] Namespace:sched-pred-8925 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 11 17:13:10.373: INFO: >>> kubeConfig: /root/.kube/config I0111 17:13:10.480578 10 log.go:181] (0xae57730) (0xae57810) Create stream I0111 17:13:10.480769 10 log.go:181] (0xae57730) (0xae57810) Stream added, broadcasting: 1 I0111 17:13:10.485916 10 log.go:181] (0xae57730) Reply frame received for 1 I0111 17:13:10.486096 10 log.go:181] (0xae57730) (0xae579d0) Create stream I0111 17:13:10.486197 10 log.go:181] (0xae57730) (0xae579d0) Stream added, broadcasting: 3 I0111 17:13:10.488023 10 log.go:181] (0xae57730) Reply frame received for 3 I0111 17:13:10.488280 10 log.go:181] (0xae57730) (0xa99f0a0) Create stream I0111 17:13:10.488409 10 log.go:181] (0xae57730) (0xa99f0a0) Stream added, broadcasting: 5 I0111 17:13:10.490309 10 log.go:181] (0xae57730) Reply frame received for 5 I0111 17:13:10.562975 10 log.go:181] (0xae57730) Data frame received for 5 I0111 17:13:10.563288 10 log.go:181] (0xa99f0a0) (5) Data frame handling I0111 17:13:10.563547 10 log.go:181] (0xa99f0a0) (5) Data frame sent I0111 17:13:10.563734 10 log.go:181] (0xae57730) Data frame received for 5 I0111 17:13:10.563944 10 log.go:181] (0xa99f0a0) (5) Data frame handling I0111 17:13:10.564150 10 log.go:181] (0xae57730) Data frame received for 3 I0111 17:13:10.564370 10 log.go:181] (0xae579d0) (3) Data frame handling I0111 17:13:10.564531 10 log.go:181] (0xa99f0a0) (5) Data frame sent I0111 17:13:10.564728 10 log.go:181] (0xae57730) Data frame received for 5 I0111 17:13:10.564980 
10 log.go:181] (0xa99f0a0) (5) Data frame handling
... (log.go:181 SPDY stream setup and data-frame exchange entries)
STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.18.0.12, port: 54321
Jan 11 17:13:10.570: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 http://172.18.0.12:54321/hostname] Namespace:sched-pred-8925 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 11 17:13:10.570: INFO: >>> kubeConfig: /root/.kube/config
... (log.go:181 SPDY stream setup and data-frame exchange entries)
STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.18.0.12, port: 54321 UDP
Jan 11 17:13:10.764: INFO: ExecWithOptions {Command:[/bin/sh -c nc -vuz -w 5 172.18.0.12 54321] Namespace:sched-pred-8925 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 11 17:13:10.764: INFO: >>> kubeConfig: /root/.kube/config
... (log.go:181 SPDY stream setup and data-frame exchange entries)
STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54321
Jan 11 17:13:15.934: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 172.18.0.12 http://127.0.0.1:54321/hostname] Namespace:sched-pred-8925 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 11 17:13:15.934: INFO: >>> kubeConfig: /root/.kube/config
... (log.go:181 SPDY stream setup and data-frame exchange entries)
STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.18.0.12, port: 54321
Jan 11 17:13:16.166: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 http://172.18.0.12:54321/hostname] Namespace:sched-pred-8925 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 11 17:13:16.167: INFO: >>> kubeConfig: /root/.kube/config
... (log.go:181 SPDY stream setup and data-frame exchange entries)
STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.18.0.12, port: 54321 UDP
Jan 11 17:13:16.346: INFO: ExecWithOptions {Command:[/bin/sh -c nc -vuz -w 5 172.18.0.12 54321] Namespace:sched-pred-8925 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 11 17:13:16.346: INFO: >>> kubeConfig: /root/.kube/config
... (log.go:181 SPDY stream setup and data-frame exchange entries)
STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54321
Jan 11 17:13:21.510: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 172.18.0.12 http://127.0.0.1:54321/hostname] Namespace:sched-pred-8925 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 11 17:13:21.510: INFO: >>> kubeConfig: /root/.kube/config
... (log.go:181 SPDY stream setup and data-frame exchange entries)
STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.18.0.12, port: 54321
Jan 11 17:13:21.735: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 http://172.18.0.12:54321/hostname] Namespace:sched-pred-8925 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 11 17:13:21.735: INFO: >>> kubeConfig: /root/.kube/config
... (log.go:181 SPDY stream setup and data-frame exchange entries)
STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.18.0.12, port: 54321 UDP
Jan 11 17:13:21.919: INFO: ExecWithOptions {Command:[/bin/sh -c nc -vuz -w 5 172.18.0.12 54321] Namespace:sched-pred-8925 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 11 17:13:21.920: INFO: >>> kubeConfig: /root/.kube/config
... (log.go:181 SPDY stream setup and data-frame exchange entries)
STEP: removing the label kubernetes.io/e2e-45f1906e-0c97-4b15-99b4-2197e92a3a3c off the node leguer-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-45f1906e-0c97-4b15-99b4-2197e92a3a3c
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 17:13:27.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-8925" for this suite.
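The checks above exercise hostPort bindings that share port 54321 on leguer-worker2 but differ in hostIP and protocol, then probe them with the curl and nc commands recorded in the log. A minimal sketch of that setup follows; the pod names, the use of nodeName, and the agnhost netexec defaults (HTTP on 8080, UDP on 8081) are illustrative assumptions, while the node name, node IP 172.18.0.12, and port 54321 come from this run.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hostport-tcp-loopback       # illustrative name
spec:
  nodeName: leguer-worker2
  containers:
  - name: agnhost
    image: k8s.gcr.io/e2e-test-images/agnhost:2.21
    args: ["netexec"]               # assumed server; any TCP listener on 8080 works
    ports:
    - containerPort: 8080
      hostPort: 54321
      hostIP: 127.0.0.1
      protocol: TCP
---
apiVersion: v1
kind: Pod
metadata:
  name: hostport-udp-nodeip         # illustrative name
spec:
  nodeName: leguer-worker2
  containers:
  - name: agnhost
    image: k8s.gcr.io/e2e-test-images/agnhost:2.21
    args: ["netexec"]               # assumed server; any UDP listener on 8081 works
    ports:
    - containerPort: 8081
      hostPort: 54321
      hostIP: 172.18.0.12
      protocol: UDP
EOF
# Probes equivalent to the log's checks (run on the node or from a host-network pod):
curl -g --connect-timeout 5 --interface 172.18.0.12 http://127.0.0.1:54321/hostname
nc -vuz -w 5 172.18.0.12 54321

Both pods schedule onto the same node because the port/hostIP/protocol triples do not collide, which is exactly what the predicate under test asserts.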
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:83 • [SLOW TEST:48.467 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":309,"completed":142,"skipped":2529,"failed":0} [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 17:13:27.173: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Jan 11 17:13:27.295: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8524 d453ea28-2743-4d85-9628-f838224abd17 201740 0 2021-01-11 17:13:27 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-01-11 17:13:27 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Jan 11 17:13:27.296: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8524 d453ea28-2743-4d85-9628-f838224abd17 201740 0 2021-01-11 17:13:27 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-01-11 17:13:27 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A and ensuring the correct watchers observe the notification Jan 11 17:13:37.313: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8524 d453ea28-2743-4d85-9628-f838224abd17 201794 0 2021-01-11 17:13:27 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-01-11 17:13:37 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Jan 11 17:13:37.315: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8524 d453ea28-2743-4d85-9628-f838224abd17 201794 0 2021-01-11 17:13:27 +0000 UTC 
map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-01-11 17:13:37 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Jan 11 17:13:47.334: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8524 d453ea28-2743-4d85-9628-f838224abd17 201814 0 2021-01-11 17:13:27 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-01-11 17:13:37 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Jan 11 17:13:47.336: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8524 d453ea28-2743-4d85-9628-f838224abd17 201814 0 2021-01-11 17:13:27 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-01-11 17:13:37 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap A and ensuring the correct watchers observe the notification Jan 11 17:13:57.350: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8524 d453ea28-2743-4d85-9628-f838224abd17 201837 0 2021-01-11 17:13:27 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-01-11 17:13:37 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Jan 11 17:13:57.351: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8524 d453ea28-2743-4d85-9628-f838224abd17 201837 0 2021-01-11 17:13:27 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-01-11 17:13:37 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Jan 11 17:14:07.367: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-8524 d4af7293-a6e3-4fba-b4f0-2ed12539e281 201858 0 2021-01-11 17:14:07 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2021-01-11 17:14:07 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Jan 11 17:14:07.369: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-8524 d4af7293-a6e3-4fba-b4f0-2ed12539e281 201858 0 2021-01-11 17:14:07 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2021-01-11 17:14:07 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap B and ensuring the correct watchers observe the notification Jan 11 17:14:17.402: INFO: Got : DELETED 
&ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-8524 d4af7293-a6e3-4fba-b4f0-2ed12539e281 201878 0 2021-01-11 17:14:07 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2021-01-11 17:14:07 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Jan 11 17:14:17.408: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-8524 d4af7293-a6e3-4fba-b4f0-2ed12539e281 201878 0 2021-01-11 17:14:07 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2021-01-11 17:14:07 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 17:14:27.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-8524" for this suite. • [SLOW TEST:60.258 seconds] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":309,"completed":143,"skipped":2529,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 17:14:27.434: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 11 17:14:27.546: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Jan 11 17:14:39.270: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2260 --namespace=crd-publish-openapi-2260 create -f -' Jan 11 17:14:46.123: INFO: stderr: "" Jan 11 17:14:46.123: INFO: stdout: "e2e-test-crd-publish-openapi-4526-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Jan 11 17:14:46.123: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2260 --namespace=crd-publish-openapi-2260 delete e2e-test-crd-publish-openapi-4526-crds test-cr' Jan 11 17:14:47.276: INFO: stderr: "" Jan 11 17:14:47.276: INFO: stdout: 
"e2e-test-crd-publish-openapi-4526-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Jan 11 17:14:47.277: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2260 --namespace=crd-publish-openapi-2260 apply -f -' Jan 11 17:14:49.542: INFO: stderr: "" Jan 11 17:14:49.542: INFO: stdout: "e2e-test-crd-publish-openapi-4526-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Jan 11 17:14:49.543: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2260 --namespace=crd-publish-openapi-2260 delete e2e-test-crd-publish-openapi-4526-crds test-cr' Jan 11 17:14:50.772: INFO: stderr: "" Jan 11 17:14:50.772: INFO: stdout: "e2e-test-crd-publish-openapi-4526-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Jan 11 17:14:50.773: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2260 explain e2e-test-crd-publish-openapi-4526-crds' Jan 11 17:14:53.616: INFO: stderr: "" Jan 11 17:14:53.616: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-4526-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 17:15:16.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2260" for this suite. • [SLOW TEST:48.899 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":309,"completed":144,"skipped":2555,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 17:15:16.334: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should create and stop a working application [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating all guestbook components Jan 11 17:15:16.465: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-replica labels: app: agnhost role: replica tier: backend spec: ports: - port: 6379 selector: app: agnhost 
role: replica tier: backend Jan 11 17:15:16.465: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-7021 create -f -' Jan 11 17:15:18.723: INFO: stderr: "" Jan 11 17:15:18.723: INFO: stdout: "service/agnhost-replica created\n" Jan 11 17:15:18.724: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-primary labels: app: agnhost role: primary tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: agnhost role: primary tier: backend Jan 11 17:15:18.724: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-7021 create -f -' Jan 11 17:15:21.440: INFO: stderr: "" Jan 11 17:15:21.441: INFO: stdout: "service/agnhost-primary created\n" Jan 11 17:15:21.442: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. # type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Jan 11 17:15:21.442: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-7021 create -f -' Jan 11 17:15:24.645: INFO: stderr: "" Jan 11 17:15:24.645: INFO: stdout: "service/frontend created\n" Jan 11 17:15:24.646: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: guestbook-frontend image: k8s.gcr.io/e2e-test-images/agnhost:2.21 args: [ "guestbook", "--backend-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 80 Jan 11 17:15:24.646: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-7021 create -f -' Jan 11 17:15:27.967: INFO: stderr: "" Jan 11 17:15:27.967: INFO: stdout: "deployment.apps/frontend created\n" Jan 11 17:15:27.968: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-primary spec: replicas: 1 selector: matchLabels: app: agnhost role: primary tier: backend template: metadata: labels: app: agnhost role: primary tier: backend spec: containers: - name: primary image: k8s.gcr.io/e2e-test-images/agnhost:2.21 args: [ "guestbook", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Jan 11 17:15:27.969: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-7021 create -f -' Jan 11 17:15:32.326: INFO: stderr: "" Jan 11 17:15:32.326: INFO: stdout: "deployment.apps/agnhost-primary created\n" Jan 11 17:15:32.328: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-replica spec: replicas: 2 selector: matchLabels: app: agnhost role: replica tier: backend template: metadata: labels: app: agnhost role: replica tier: backend spec: containers: - name: replica image: k8s.gcr.io/e2e-test-images/agnhost:2.21 args: [ "guestbook", "--replicaof", "agnhost-primary", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Jan 11 17:15:32.328: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-7021 create -f -' Jan 11 17:15:35.101: INFO: stderr: "" Jan 11 17:15:35.101: 
INFO: stdout: "deployment.apps/agnhost-replica created\n" STEP: validating guestbook app Jan 11 17:15:35.101: INFO: Waiting for all frontend pods to be Running. Jan 11 17:15:35.152: INFO: Waiting for frontend to serve content. Jan 11 17:15:36.255: INFO: Failed to get response from guestbook. err: the server responded with the status code 417 but did not return more information (get services frontend), response: Jan 11 17:15:41.267: INFO: Trying to add a new entry to the guestbook. Jan 11 17:15:41.280: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Jan 11 17:15:41.290: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-7021 delete --grace-period=0 --force -f -' Jan 11 17:15:42.506: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 11 17:15:42.506: INFO: stdout: "service \"agnhost-replica\" force deleted\n" STEP: using delete to clean up resources Jan 11 17:15:42.508: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-7021 delete --grace-period=0 --force -f -' Jan 11 17:15:43.831: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 11 17:15:43.832: INFO: stdout: "service \"agnhost-primary\" force deleted\n" STEP: using delete to clean up resources Jan 11 17:15:43.834: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-7021 delete --grace-period=0 --force -f -' Jan 11 17:15:45.110: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 11 17:15:45.110: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Jan 11 17:15:45.112: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-7021 delete --grace-period=0 --force -f -' Jan 11 17:15:46.287: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 11 17:15:46.287: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Jan 11 17:15:46.288: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-7021 delete --grace-period=0 --force -f -' Jan 11 17:15:47.490: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 11 17:15:47.491: INFO: stdout: "deployment.apps \"agnhost-primary\" force deleted\n" STEP: using delete to clean up resources Jan 11 17:15:47.492: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-7021 delete --grace-period=0 --force -f -' Jan 11 17:15:48.816: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Jan 11 17:15:48.817: INFO: stdout: "deployment.apps \"agnhost-replica\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 17:15:48.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7021" for this suite. • [SLOW TEST:33.037 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:342 should create and stop a working application [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":309,"completed":145,"skipped":2558,"failed":0} S ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 17:15:49.372: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 11 17:15:58.199: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 11 17:16:00.323: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745982158, loc:(*time.Location)(0x5f133f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745982158, loc:(*time.Location)(0x5f133f0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745982158, loc:(*time.Location)(0x5f133f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745982158, loc:(*time.Location)(0x5f133f0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 11 17:16:03.445: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 17:16:03.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9410" for this suite. STEP: Destroying namespace "webhook-9410-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:14.304 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":309,"completed":146,"skipped":2559,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 17:16:03.678: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating configMap configmap-2691/configmap-test-5ff10c92-c006-424e-91ff-2b71331b8889 STEP: Creating a pod to test consume configMaps Jan 11 17:16:03.836: INFO: Waiting up to 5m0s for pod "pod-configmaps-a7c83ab4-638f-4112-acab-9d203d1f7cd8" in namespace "configmap-2691" to be "Succeeded or Failed" Jan 11 17:16:03.865: INFO: Pod "pod-configmaps-a7c83ab4-638f-4112-acab-9d203d1f7cd8": Phase="Pending", Reason="", readiness=false. Elapsed: 29.152513ms Jan 11 17:16:05.871: INFO: Pod "pod-configmaps-a7c83ab4-638f-4112-acab-9d203d1f7cd8": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.035290562s Jan 11 17:16:07.880: INFO: Pod "pod-configmaps-a7c83ab4-638f-4112-acab-9d203d1f7cd8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.044082759s STEP: Saw pod success Jan 11 17:16:07.881: INFO: Pod "pod-configmaps-a7c83ab4-638f-4112-acab-9d203d1f7cd8" satisfied condition "Succeeded or Failed" Jan 11 17:16:07.885: INFO: Trying to get logs from node leguer-worker pod pod-configmaps-a7c83ab4-638f-4112-acab-9d203d1f7cd8 container env-test: STEP: delete the pod Jan 11 17:16:07.932: INFO: Waiting for pod pod-configmaps-a7c83ab4-638f-4112-acab-9d203d1f7cd8 to disappear Jan 11 17:16:07.936: INFO: Pod pod-configmaps-a7c83ab4-638f-4112-acab-9d203d1f7cd8 no longer exists [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 17:16:07.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2691" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":309,"completed":147,"skipped":2582,"failed":0} ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 17:16:07.952: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:52 STEP: create the container to handle the HTTPGet hook request. 
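The configmap-2691 check above amounts to injecting a ConfigMap key into a container environment and asserting on the container's output. A minimal equivalent is sketched below; the object names, data key, and busybox image are illustrative, only the pattern is taken from this run.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test              # illustrative name
data:
  data-1: value-1                   # illustrative key/value
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps              # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    command: ["sh", "-c", "env"]    # print the environment and exit
    env:
    - name: CONFIG_DATA_1
      valueFrom:
        configMapKeyRef:
          name: configmap-test
          key: data-1
EOF
# Once the pod has Succeeded, its log should contain the injected value:
kubectl logs pod-configmaps -c env-test | grep CONFIG_DATA_1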
[It] should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Jan 11 17:16:16.115: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 11 17:16:16.137: INFO: Pod pod-with-prestop-http-hook still exists Jan 11 17:16:18.138: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 11 17:16:18.147: INFO: Pod pod-with-prestop-http-hook still exists Jan 11 17:16:20.138: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 11 17:16:20.147: INFO: Pod pod-with-prestop-http-hook still exists Jan 11 17:16:22.138: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 11 17:16:22.145: INFO: Pod pod-with-prestop-http-hook still exists Jan 11 17:16:24.138: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 11 17:16:24.146: INFO: Pod pod-with-prestop-http-hook still exists Jan 11 17:16:26.138: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 11 17:16:26.147: INFO: Pod pod-with-prestop-http-hook still exists Jan 11 17:16:28.138: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 11 17:16:28.147: INFO: Pod pod-with-prestop-http-hook still exists Jan 11 17:16:30.138: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 11 17:16:30.151: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 17:16:30.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-2008" for this suite. 
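For reference, a minimal Go sketch of a pod carrying a preStop httpGet lifecycle hook like the one deleted above. Names, image, and the hook target are illustrative (the conformance test points the hook at a separate handler pod), and the LifecycleHandler type name follows recent k8s.io/api releases:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// Pod with a preStop httpGet hook; on deletion the kubelet calls the
	// hook endpoint before sending SIGTERM to the container.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-prestop-http-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pod-with-hook",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.21", // illustrative image
				Lifecycle: &corev1.Lifecycle{
					PreStop: &corev1.LifecycleHandler{
						HTTPGet: &corev1.HTTPGetAction{
							Path: "/echo?msg=prestop", // illustrative target path/port
							Port: intstr.FromInt(8080),
						},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}

When such a pod is deleted, the kubelet invokes the hook before the container receives SIGTERM, which is what the "check prestop hook" step verifies.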
• [SLOW TEST:22.220 seconds] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:43 should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":309,"completed":148,"skipped":2582,"failed":0} SSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 17:16:30.173: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: getting the auto-created API token Jan 11 17:16:30.901: INFO: created pod pod-service-account-defaultsa Jan 11 17:16:30.902: INFO: pod pod-service-account-defaultsa service account token volume mount: true Jan 11 17:16:30.929: INFO: created pod pod-service-account-mountsa Jan 11 17:16:30.929: INFO: pod pod-service-account-mountsa service account token volume mount: true Jan 11 17:16:30.946: INFO: created pod pod-service-account-nomountsa Jan 11 17:16:30.947: INFO: pod pod-service-account-nomountsa service account token volume mount: false Jan 11 17:16:31.007: INFO: created pod pod-service-account-defaultsa-mountspec Jan 11 17:16:31.007: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Jan 11 17:16:31.044: INFO: created pod pod-service-account-mountsa-mountspec Jan 11 17:16:31.044: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Jan 11 17:16:31.071: INFO: created pod pod-service-account-nomountsa-mountspec Jan 11 17:16:31.071: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Jan 11 17:16:31.092: INFO: created pod pod-service-account-defaultsa-nomountspec Jan 11 17:16:31.092: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Jan 11 17:16:31.128: INFO: created pod pod-service-account-mountsa-nomountspec Jan 11 17:16:31.128: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Jan 11 17:16:31.165: INFO: created pod pod-service-account-nomountsa-nomountspec Jan 11 17:16:31.166: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 17:16:31.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-4208" for 
this suite. •{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":309,"completed":149,"skipped":2587,"failed":0} SSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 17:16:31.269: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Performing setup for networking test in namespace pod-network-test-4383 STEP: creating a selector STEP: Creating the service pods in kubernetes Jan 11 17:16:31.369: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Jan 11 17:16:31.429: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 11 17:16:33.686: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 11 17:16:35.897: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 11 17:16:37.549: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 11 17:16:39.637: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 11 17:16:41.582: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 11 17:16:43.440: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 11 17:16:45.595: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 11 17:16:47.438: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 11 17:16:49.438: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 11 17:16:51.438: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 11 17:16:53.437: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 11 17:16:55.437: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 11 17:16:57.437: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 11 17:16:59.436: INFO: The status of Pod netserver-0 is Running (Ready = true) Jan 11 17:16:59.446: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Jan 11 17:17:03.521: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 Jan 11 17:17:03.522: INFO: Going to poll 10.244.2.41 on port 8081 at least 0 times, with a maximum of 34 tries before failing Jan 11 17:17:03.527: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.41 8081 | grep -v '^\s*$'] Namespace:pod-network-test-4383 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 11 17:17:03.527: INFO: >>> kubeConfig: /root/.kube/config I0111 17:17:03.639752 10 log.go:181] (0xba02460) (0xba024d0) Create stream I0111 
17:17:03.639944 10 log.go:181] (0xba02460) (0xba024d0) Stream added, broadcasting: 1 I0111 17:17:03.644295 10 log.go:181] (0xba02460) Reply frame received for 1 I0111 17:17:03.644490 10 log.go:181] (0xba02460) (0xbe1e070) Create stream I0111 17:17:03.644577 10 log.go:181] (0xba02460) (0xbe1e070) Stream added, broadcasting: 3 I0111 17:17:03.646254 10 log.go:181] (0xba02460) Reply frame received for 3 I0111 17:17:03.646427 10 log.go:181] (0xba02460) (0xba02690) Create stream I0111 17:17:03.646543 10 log.go:181] (0xba02460) (0xba02690) Stream added, broadcasting: 5 I0111 17:17:03.647914 10 log.go:181] (0xba02460) Reply frame received for 5 I0111 17:17:04.723416 10 log.go:181] (0xba02460) Data frame received for 3 I0111 17:17:04.723725 10 log.go:181] (0xbe1e070) (3) Data frame handling I0111 17:17:04.724074 10 log.go:181] (0xbe1e070) (3) Data frame sent I0111 17:17:04.724286 10 log.go:181] (0xba02460) Data frame received for 3 I0111 17:17:04.724526 10 log.go:181] (0xbe1e070) (3) Data frame handling I0111 17:17:04.724906 10 log.go:181] (0xba02460) Data frame received for 5 I0111 17:17:04.725096 10 log.go:181] (0xba02690) (5) Data frame handling I0111 17:17:04.726524 10 log.go:181] (0xba02460) Data frame received for 1 I0111 17:17:04.726653 10 log.go:181] (0xba024d0) (1) Data frame handling I0111 17:17:04.726779 10 log.go:181] (0xba024d0) (1) Data frame sent I0111 17:17:04.726881 10 log.go:181] (0xba02460) (0xba024d0) Stream removed, broadcasting: 1 I0111 17:17:04.727008 10 log.go:181] (0xba02460) Go away received I0111 17:17:04.727333 10 log.go:181] (0xba02460) (0xba024d0) Stream removed, broadcasting: 1 I0111 17:17:04.727441 10 log.go:181] (0xba02460) (0xbe1e070) Stream removed, broadcasting: 3 I0111 17:17:04.727520 10 log.go:181] (0xba02460) (0xba02690) Stream removed, broadcasting: 5 Jan 11 17:17:04.728: INFO: Found all 1 expected endpoints: [netserver-0] Jan 11 17:17:04.728: INFO: Going to poll 10.244.1.65 on port 8081 at least 0 times, with a maximum of 34 tries before failing Jan 11 17:17:04.734: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.65 8081 | grep -v '^\s*$'] Namespace:pod-network-test-4383 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 11 17:17:04.734: INFO: >>> kubeConfig: /root/.kube/config I0111 17:17:04.842052 10 log.go:181] (0xbe1e620) (0xbe1e700) Create stream I0111 17:17:04.842257 10 log.go:181] (0xbe1e620) (0xbe1e700) Stream added, broadcasting: 1 I0111 17:17:04.847922 10 log.go:181] (0xbe1e620) Reply frame received for 1 I0111 17:17:04.848217 10 log.go:181] (0xbe1e620) (0xbe1e8c0) Create stream I0111 17:17:04.848373 10 log.go:181] (0xbe1e620) (0xbe1e8c0) Stream added, broadcasting: 3 I0111 17:17:04.850720 10 log.go:181] (0xbe1e620) Reply frame received for 3 I0111 17:17:04.850844 10 log.go:181] (0xbe1e620) (0xbe1ea80) Create stream I0111 17:17:04.850908 10 log.go:181] (0xbe1e620) (0xbe1ea80) Stream added, broadcasting: 5 I0111 17:17:04.852345 10 log.go:181] (0xbe1e620) Reply frame received for 5 I0111 17:17:05.942719 10 log.go:181] (0xbe1e620) Data frame received for 5 I0111 17:17:05.942925 10 log.go:181] (0xbe1ea80) (5) Data frame handling I0111 17:17:05.943066 10 log.go:181] (0xbe1e620) Data frame received for 3 I0111 17:17:05.943172 10 log.go:181] (0xbe1e8c0) (3) Data frame handling I0111 17:17:05.943301 10 log.go:181] (0xbe1e8c0) (3) Data frame sent I0111 17:17:05.943398 10 log.go:181] (0xbe1e620) Data frame received for 3 I0111 
17:17:05.943486 10 log.go:181] (0xbe1e8c0) (3) Data frame handling I0111 17:17:05.944669 10 log.go:181] (0xbe1e620) Data frame received for 1 I0111 17:17:05.945005 10 log.go:181] (0xbe1e700) (1) Data frame handling I0111 17:17:05.945249 10 log.go:181] (0xbe1e700) (1) Data frame sent I0111 17:17:05.945421 10 log.go:181] (0xbe1e620) (0xbe1e700) Stream removed, broadcasting: 1 I0111 17:17:05.945642 10 log.go:181] (0xbe1e620) Go away received I0111 17:17:05.946277 10 log.go:181] (0xbe1e620) (0xbe1e700) Stream removed, broadcasting: 1 I0111 17:17:05.946502 10 log.go:181] (0xbe1e620) (0xbe1e8c0) Stream removed, broadcasting: 3 I0111 17:17:05.946655 10 log.go:181] (0xbe1e620) (0xbe1ea80) Stream removed, broadcasting: 5 Jan 11 17:17:05.946: INFO: Found all 1 expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 17:17:05.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-4383" for this suite. • [SLOW TEST:34.697 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:27 Granular Checks: Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:30 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":150,"skipped":2592,"failed":0} S ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 17:17:05.968: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating secret with name secret-test-map-79f50778-6242-4cbd-8394-6d765a5e2797 STEP: Creating a pod to test consume secrets Jan 11 17:17:06.074: INFO: Waiting up to 5m0s for pod "pod-secrets-e4d1a53d-188e-4dff-9252-1a9c581a743d" in namespace "secrets-7449" to be "Succeeded or Failed" Jan 11 17:17:06.105: INFO: Pod "pod-secrets-e4d1a53d-188e-4dff-9252-1a9c581a743d": Phase="Pending", Reason="", readiness=false. Elapsed: 31.173214ms Jan 11 17:17:08.113: INFO: Pod "pod-secrets-e4d1a53d-188e-4dff-9252-1a9c581a743d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038924601s Jan 11 17:17:10.119: INFO: Pod "pod-secrets-e4d1a53d-188e-4dff-9252-1a9c581a743d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.044791281s STEP: Saw pod success Jan 11 17:17:10.119: INFO: Pod "pod-secrets-e4d1a53d-188e-4dff-9252-1a9c581a743d" satisfied condition "Succeeded or Failed" Jan 11 17:17:10.124: INFO: Trying to get logs from node leguer-worker pod pod-secrets-e4d1a53d-188e-4dff-9252-1a9c581a743d container secret-volume-test: STEP: delete the pod Jan 11 17:17:10.149: INFO: Waiting for pod pod-secrets-e4d1a53d-188e-4dff-9252-1a9c581a743d to disappear Jan 11 17:17:10.193: INFO: Pod pod-secrets-e4d1a53d-188e-4dff-9252-1a9c581a743d no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 17:17:10.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7449" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":151,"skipped":2593,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 17:17:10.215: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [BeforeEach] Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:299 [It] should create and stop a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating a replication controller Jan 11 17:17:10.298: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-5286 create -f -' Jan 11 17:17:13.377: INFO: stderr: "" Jan 11 17:17:13.377: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Jan 11 17:17:13.378: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-5286 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 11 17:17:14.560: INFO: stderr: "" Jan 11 17:17:14.560: INFO: stdout: "update-demo-nautilus-dzwvt update-demo-nautilus-wvrnc " Jan 11 17:17:14.561: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-5286 get pods update-demo-nautilus-dzwvt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' Jan 11 17:17:15.697: INFO: stderr: "" Jan 11 17:17:15.697: INFO: stdout: "" Jan 11 17:17:15.698: INFO: update-demo-nautilus-dzwvt is created but not running Jan 11 17:17:20.699: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-5286 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 11 17:17:21.965: INFO: stderr: "" Jan 11 17:17:21.965: INFO: stdout: "update-demo-nautilus-dzwvt update-demo-nautilus-wvrnc " Jan 11 17:17:21.965: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-5286 get pods update-demo-nautilus-dzwvt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 11 17:17:23.164: INFO: stderr: "" Jan 11 17:17:23.164: INFO: stdout: "true" Jan 11 17:17:23.164: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-5286 get pods update-demo-nautilus-dzwvt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jan 11 17:17:24.417: INFO: stderr: "" Jan 11 17:17:24.417: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 11 17:17:24.417: INFO: validating pod update-demo-nautilus-dzwvt Jan 11 17:17:24.433: INFO: got data: { "image": "nautilus.jpg" } Jan 11 17:17:24.434: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 11 17:17:24.434: INFO: update-demo-nautilus-dzwvt is verified up and running Jan 11 17:17:24.434: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-5286 get pods update-demo-nautilus-wvrnc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 11 17:17:25.600: INFO: stderr: "" Jan 11 17:17:25.600: INFO: stdout: "true" Jan 11 17:17:25.601: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-5286 get pods update-demo-nautilus-wvrnc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jan 11 17:17:26.800: INFO: stderr: "" Jan 11 17:17:26.800: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 11 17:17:26.801: INFO: validating pod update-demo-nautilus-wvrnc Jan 11 17:17:26.807: INFO: got data: { "image": "nautilus.jpg" } Jan 11 17:17:26.807: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 11 17:17:26.808: INFO: update-demo-nautilus-wvrnc is verified up and running STEP: using delete to clean up resources Jan 11 17:17:26.808: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-5286 delete --grace-period=0 --force -f -' Jan 11 17:17:28.102: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Jan 11 17:17:28.102: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Jan 11 17:17:28.102: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-5286 get rc,svc -l name=update-demo --no-headers' Jan 11 17:17:29.385: INFO: stderr: "No resources found in kubectl-5286 namespace.\n" Jan 11 17:17:29.386: INFO: stdout: "" Jan 11 17:17:29.386: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-5286 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 11 17:17:30.600: INFO: stderr: "" Jan 11 17:17:30.600: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 17:17:30.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5286" for this suite. • [SLOW TEST:20.401 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:297 should create and stop a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":309,"completed":152,"skipped":2625,"failed":0} SSSSSS ------------------------------ [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 17:17:30.618: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating the pod with failed condition STEP: updating the pod Jan 11 17:19:31.384: INFO: Successfully updated pod "var-expansion-2870fe0a-e467-4a09-9512-27ba4665b564" STEP: waiting for pod running STEP: deleting the pod gracefully Jan 11 17:19:33.438: INFO: Deleting pod "var-expansion-2870fe0a-e467-4a09-9512-27ba4665b564" in namespace "var-expansion-8719" Jan 11 17:19:33.446: INFO: Wait up to 5m0s for pod "var-expansion-2870fe0a-e467-4a09-9512-27ba4665b564" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 17:20:11.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-8719" for this suite. 
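The var-expansion pod above is created with a subpath expansion that initially fails, then updated so the kubelet can complete the mount and the pod runs. A minimal sketch of the underlying subPathExpr wiring, with illustrative names and a busybox image (the failing-then-fixed expansion the test drives is not reproduced here):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Container mounting an emptyDir under a subPathExpr that is expanded
	// from an environment variable at container start.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name:         "workdir",
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "busybox",
				Command: []string{"sh", "-c", "ls /volume_mount && sleep 3600"},
				Env: []corev1.EnvVar{{
					Name: "POD_NAME",
					ValueFrom: &corev1.EnvVarSource{
						FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
					},
				}},
				VolumeMounts: []corev1.VolumeMount{{
					Name:        "workdir",
					MountPath:   "/volume_mount",
					SubPathExpr: "$(POD_NAME)", // must expand to a relative path
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}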
• [SLOW TEST:160.886 seconds] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]","total":309,"completed":153,"skipped":2631,"failed":0} [sig-network] IngressClass API should support creating IngressClass API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] IngressClass API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 17:20:11.505: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename ingressclass STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] IngressClass API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingressclass.go:148 [It] should support creating IngressClass API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: getting /apis STEP: getting /apis/networking.k8s.io STEP: getting /apis/networking.k8s.iov1 STEP: creating STEP: getting STEP: listing STEP: watching Jan 11 17:20:11.705: INFO: starting watch STEP: patching STEP: updating Jan 11 17:20:11.730: INFO: waiting for watch events with expected annotations Jan 11 17:20:11.731: INFO: saw patched and updated annotations STEP: deleting STEP: deleting a collection [AfterEach] [sig-network] IngressClass API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 17:20:11.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "ingressclass-9782" for this suite. 
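The IngressClass operations above (create, get, list, watch, patch, update, delete, deleteCollection) act on objects of roughly the following shape; the controller string and annotation are illustrative assumptions:

package main

import (
	"encoding/json"
	"fmt"

	networkingv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Minimal IngressClass of the kind exercised by the API-operations test.
	ic := &networkingv1.IngressClass{
		ObjectMeta: metav1.ObjectMeta{
			Name:        "example-ingress-class",
			Annotations: map[string]string{"patched": "true"},
		},
		Spec: networkingv1.IngressClassSpec{
			Controller: "example.com/ingress-controller",
		},
	}
	out, _ := json.MarshalIndent(ic, "", "  ")
	fmt.Println(string(out))
}

spec.controller names the implementation expected to honor Ingresses that reference this class.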
•{"msg":"PASSED [sig-network] IngressClass API should support creating IngressClass API operations [Conformance]","total":309,"completed":154,"skipped":2631,"failed":0} ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 17:20:11.772: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating projection with secret that has name projected-secret-test-map-a7799c11-598b-4aa5-a9eb-fe19ff6364c2 STEP: Creating a pod to test consume secrets Jan 11 17:20:11.959: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9d6aab90-25af-4a0c-aed1-1d31acf21109" in namespace "projected-5903" to be "Succeeded or Failed" Jan 11 17:20:11.978: INFO: Pod "pod-projected-secrets-9d6aab90-25af-4a0c-aed1-1d31acf21109": Phase="Pending", Reason="", readiness=false. Elapsed: 18.962642ms Jan 11 17:20:13.987: INFO: Pod "pod-projected-secrets-9d6aab90-25af-4a0c-aed1-1d31acf21109": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027720425s Jan 11 17:20:15.995: INFO: Pod "pod-projected-secrets-9d6aab90-25af-4a0c-aed1-1d31acf21109": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036202029s STEP: Saw pod success Jan 11 17:20:15.995: INFO: Pod "pod-projected-secrets-9d6aab90-25af-4a0c-aed1-1d31acf21109" satisfied condition "Succeeded or Failed" Jan 11 17:20:16.001: INFO: Trying to get logs from node leguer-worker pod pod-projected-secrets-9d6aab90-25af-4a0c-aed1-1d31acf21109 container projected-secret-volume-test: STEP: delete the pod Jan 11 17:20:16.062: INFO: Waiting for pod pod-projected-secrets-9d6aab90-25af-4a0c-aed1-1d31acf21109 to disappear Jan 11 17:20:16.083: INFO: Pod pod-projected-secrets-9d6aab90-25af-4a0c-aed1-1d31acf21109 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 17:20:16.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5903" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":309,"completed":155,"skipped":2631,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 17:20:16.098: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 11 17:20:16.218: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-17a93019-a59b-44e0-b630-5b0d5956d555" in namespace "security-context-test-7864" to be "Succeeded or Failed" Jan 11 17:20:16.228: INFO: Pod "alpine-nnp-false-17a93019-a59b-44e0-b630-5b0d5956d555": Phase="Pending", Reason="", readiness=false. Elapsed: 9.766271ms Jan 11 17:20:18.237: INFO: Pod "alpine-nnp-false-17a93019-a59b-44e0-b630-5b0d5956d555": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019348713s Jan 11 17:20:20.244: INFO: Pod "alpine-nnp-false-17a93019-a59b-44e0-b630-5b0d5956d555": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025768348s Jan 11 17:20:20.244: INFO: Pod "alpine-nnp-false-17a93019-a59b-44e0-b630-5b0d5956d555" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 17:20:20.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-7864" for this suite. 
•{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":156,"skipped":2639,"failed":0} SSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 17:20:20.284: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should provide podname only [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward API volume plugin Jan 11 17:20:20.524: INFO: Waiting up to 5m0s for pod "downwardapi-volume-40269556-64a2-4d3e-9896-e7cc5b737f33" in namespace "downward-api-5638" to be "Succeeded or Failed" Jan 11 17:20:20.564: INFO: Pod "downwardapi-volume-40269556-64a2-4d3e-9896-e7cc5b737f33": Phase="Pending", Reason="", readiness=false. Elapsed: 39.204443ms Jan 11 17:20:22.570: INFO: Pod "downwardapi-volume-40269556-64a2-4d3e-9896-e7cc5b737f33": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04556767s Jan 11 17:20:24.577: INFO: Pod "downwardapi-volume-40269556-64a2-4d3e-9896-e7cc5b737f33": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.052828988s STEP: Saw pod success Jan 11 17:20:24.578: INFO: Pod "downwardapi-volume-40269556-64a2-4d3e-9896-e7cc5b737f33" satisfied condition "Succeeded or Failed" Jan 11 17:20:24.582: INFO: Trying to get logs from node leguer-worker pod downwardapi-volume-40269556-64a2-4d3e-9896-e7cc5b737f33 container client-container: STEP: delete the pod Jan 11 17:20:24.645: INFO: Waiting for pod downwardapi-volume-40269556-64a2-4d3e-9896-e7cc5b737f33 to disappear Jan 11 17:20:24.652: INFO: Pod downwardapi-volume-40269556-64a2-4d3e-9896-e7cc5b737f33 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 17:20:24.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5638" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":309,"completed":157,"skipped":2645,"failed":0} ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 17:20:24.682: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should provide container's cpu request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward API volume plugin Jan 11 17:20:24.804: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ab18a897-5f55-4d42-a5b1-98a144412ffe" in namespace "projected-7732" to be "Succeeded or Failed" Jan 11 17:20:24.827: INFO: Pod "downwardapi-volume-ab18a897-5f55-4d42-a5b1-98a144412ffe": Phase="Pending", Reason="", readiness=false. Elapsed: 22.667456ms Jan 11 17:20:26.848: INFO: Pod "downwardapi-volume-ab18a897-5f55-4d42-a5b1-98a144412ffe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043694103s Jan 11 17:20:28.857: INFO: Pod "downwardapi-volume-ab18a897-5f55-4d42-a5b1-98a144412ffe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.052142856s STEP: Saw pod success Jan 11 17:20:28.857: INFO: Pod "downwardapi-volume-ab18a897-5f55-4d42-a5b1-98a144412ffe" satisfied condition "Succeeded or Failed" Jan 11 17:20:28.863: INFO: Trying to get logs from node leguer-worker pod downwardapi-volume-ab18a897-5f55-4d42-a5b1-98a144412ffe container client-container: STEP: delete the pod Jan 11 17:20:28.934: INFO: Waiting for pod downwardapi-volume-ab18a897-5f55-4d42-a5b1-98a144412ffe to disappear Jan 11 17:20:28.939: INFO: Pod downwardapi-volume-ab18a897-5f55-4d42-a5b1-98a144412ffe no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 17:20:28.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7732" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":309,"completed":158,"skipped":2645,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 17:20:28.957: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jan 11 17:20:32.170: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 17:20:32.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-1011" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":309,"completed":159,"skipped":2656,"failed":0} SSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 17:20:32.220: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should provide container's memory request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward API volume plugin Jan 11 17:20:32.527: INFO: Waiting up to 5m0s for pod "downwardapi-volume-04418aff-f66f-43b6-861c-2db7a3736bc2" in namespace "downward-api-611" to be "Succeeded or Failed" Jan 11 17:20:32.554: INFO: Pod "downwardapi-volume-04418aff-f66f-43b6-861c-2db7a3736bc2": Phase="Pending", Reason="", readiness=false. 
Elapsed: 27.282499ms Jan 11 17:20:34.682: INFO: Pod "downwardapi-volume-04418aff-f66f-43b6-861c-2db7a3736bc2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.154780353s Jan 11 17:20:36.691: INFO: Pod "downwardapi-volume-04418aff-f66f-43b6-861c-2db7a3736bc2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.163469593s STEP: Saw pod success Jan 11 17:20:36.691: INFO: Pod "downwardapi-volume-04418aff-f66f-43b6-861c-2db7a3736bc2" satisfied condition "Succeeded or Failed" Jan 11 17:20:36.695: INFO: Trying to get logs from node leguer-worker pod downwardapi-volume-04418aff-f66f-43b6-861c-2db7a3736bc2 container client-container: STEP: delete the pod Jan 11 17:20:36.737: INFO: Waiting for pod downwardapi-volume-04418aff-f66f-43b6-861c-2db7a3736bc2 to disappear Jan 11 17:20:36.762: INFO: Pod downwardapi-volume-04418aff-f66f-43b6-861c-2db7a3736bc2 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 17:20:36.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-611" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":309,"completed":160,"skipped":2662,"failed":0} SS ------------------------------ [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 17:20:36.789: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 11 17:20:38.956: INFO: Deleting pod "var-expansion-7e050d98-ea5e-4b01-9251-853d280ed324" in namespace "var-expansion-3484" Jan 11 17:20:38.963: INFO: Wait up to 5m0s for pod "var-expansion-7e050d98-ea5e-4b01-9251-853d280ed324" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 17:21:30.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-3484" for this suite. 
• [SLOW TEST:54.220 seconds] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]","total":309,"completed":161,"skipped":2664,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-scheduling] LimitRange /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 17:21:31.016: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename limitrange STEP: Waiting for a default service account to be provisioned in namespace [It] should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a LimitRange STEP: Setting up watch STEP: Submitting a LimitRange Jan 11 17:21:31.112: INFO: observed the limitRanges list STEP: Verifying LimitRange creation was observed STEP: Fetching the LimitRange to ensure it has proper values Jan 11 17:21:31.132: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Jan 11 17:21:31.133: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with no resource requirements STEP: Ensuring Pod has resource requirements applied from LimitRange Jan 11 17:21:31.171: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Jan 11 17:21:31.171: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with partial resource requirements STEP: Ensuring Pod has merged resource requirements applied from LimitRange Jan 11 17:21:31.509: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} 
{} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] Jan 11 17:21:31.509: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Failing to create a Pod with less than min resources STEP: Failing to create a Pod with more than max resources STEP: Updating a LimitRange STEP: Verifying LimitRange updating is effective STEP: Creating a Pod with less than former min resources STEP: Failing to create a Pod with more than max resources STEP: Deleting a LimitRange STEP: Verifying the LimitRange was deleted Jan 11 17:21:38.697: INFO: limitRange is already deleted STEP: Creating a Pod with more than former max resources [AfterEach] [sig-scheduling] LimitRange /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 17:21:38.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "limitrange-922" for this suite. • [SLOW TEST:7.717 seconds] [sig-scheduling] LimitRange /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":309,"completed":162,"skipped":2710,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 17:21:38.733: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating projection with secret that has name projected-secret-test-1c7a3105-6077-4d8f-8135-c952714c6a2c STEP: Creating a pod to test consume secrets Jan 11 17:21:38.866: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-92b597fe-2b8d-47ee-9d36-0990c854c376" in namespace "projected-9301" to be "Succeeded or Failed" Jan 11 17:21:38.910: INFO: Pod "pod-projected-secrets-92b597fe-2b8d-47ee-9d36-0990c854c376": Phase="Pending", Reason="", readiness=false. Elapsed: 43.233563ms Jan 11 17:21:43.740: INFO: Pod "pod-projected-secrets-92b597fe-2b8d-47ee-9d36-0990c854c376": Phase="Pending", Reason="", readiness=false. Elapsed: 4.873944032s Jan 11 17:21:45.866: INFO: Pod "pod-projected-secrets-92b597fe-2b8d-47ee-9d36-0990c854c376": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.999341585s Jan 11 17:21:47.875: INFO: Pod "pod-projected-secrets-92b597fe-2b8d-47ee-9d36-0990c854c376": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.008461844s STEP: Saw pod success Jan 11 17:21:47.875: INFO: Pod "pod-projected-secrets-92b597fe-2b8d-47ee-9d36-0990c854c376" satisfied condition "Succeeded or Failed" Jan 11 17:21:47.880: INFO: Trying to get logs from node leguer-worker2 pod pod-projected-secrets-92b597fe-2b8d-47ee-9d36-0990c854c376 container projected-secret-volume-test: STEP: delete the pod Jan 11 17:21:48.016: INFO: Waiting for pod pod-projected-secrets-92b597fe-2b8d-47ee-9d36-0990c854c376 to disappear Jan 11 17:21:48.037: INFO: Pod pod-projected-secrets-92b597fe-2b8d-47ee-9d36-0990c854c376 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 17:21:48.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9301" for this suite. • [SLOW TEST:9.343 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":309,"completed":163,"skipped":2716,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 17:21:48.079: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 11 17:21:48.272: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-1288 create -f -' Jan 11 17:21:50.886: INFO: stderr: "" Jan 11 17:21:50.886: INFO: stdout: "replicationcontroller/agnhost-primary created\n" Jan 11 17:21:50.886: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-1288 create -f -' Jan 11 17:21:54.169: INFO: stderr: "" Jan 11 17:21:54.169: INFO: stdout: "service/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. Jan 11 17:21:55.178: INFO: Selector matched 1 pods for map[app:agnhost] Jan 11 17:21:55.179: INFO: Found 1 / 1 Jan 11 17:21:55.179: INFO: WaitFor completed with timeout 5m0s. 
Pods found = 1 out of 1 Jan 11 17:21:55.185: INFO: Selector matched 1 pods for map[app:agnhost] Jan 11 17:21:55.185: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Jan 11 17:21:55.185: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-1288 describe pod agnhost-primary-nfzgw' Jan 11 17:21:56.483: INFO: stderr: "" Jan 11 17:21:56.484: INFO: stdout: "Name: agnhost-primary-nfzgw\nNamespace: kubectl-1288\nPriority: 0\nNode: leguer-worker2/172.18.0.12\nStart Time: Mon, 11 Jan 2021 17:21:50 +0000\nLabels: app=agnhost\n role=primary\nAnnotations: \nStatus: Running\nIP: 10.244.1.72\nIPs:\n IP: 10.244.1.72\nControlled By: ReplicationController/agnhost-primary\nContainers:\n agnhost-primary:\n Container ID: containerd://76f481435c9823d1050fddd40395d8cedfe01d44660fab93ed349d6c26a8daee\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.21\n Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:ab055cd3d45f50b90732c14593a5bf50f210871bb4f91994c756fc22db6d922a\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Mon, 11 Jan 2021 17:21:53 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-dfgc7 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-dfgc7:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-dfgc7\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 6s default-scheduler Successfully assigned kubectl-1288/agnhost-primary-nfzgw to leguer-worker2\n Normal Pulled 4s kubelet Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.21\" already present on machine\n Normal Created 3s kubelet Created container agnhost-primary\n Normal Started 3s kubelet Started container agnhost-primary\n" Jan 11 17:21:56.486: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-1288 describe rc agnhost-primary' Jan 11 17:21:57.822: INFO: stderr: "" Jan 11 17:21:57.823: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-1288\nSelector: app=agnhost,role=primary\nLabels: app=agnhost\n role=primary\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=primary\n Containers:\n agnhost-primary:\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.21\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 7s replication-controller Created pod: agnhost-primary-nfzgw\n" Jan 11 17:21:57.823: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-1288 describe service agnhost-primary' Jan 11 17:21:59.072: INFO: stderr: "" Jan 11 17:21:59.072: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-1288\nLabels: app=agnhost\n role=primary\nAnnotations: \nSelector: app=agnhost,role=primary\nType: ClusterIP\nIP Families: \nIP: 10.96.68.132\nIPs: 10.96.68.132\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.1.72:6379\nSession Affinity: None\nEvents: 
\n" Jan 11 17:21:59.083: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-1288 describe node leguer-control-plane' Jan 11 17:22:00.395: INFO: stderr: "" Jan 11 17:22:00.395: INFO: stdout: "Name: leguer-control-plane\nRoles: control-plane,master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=leguer-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/control-plane=\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 10 Jan 2021 17:37:43 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: leguer-control-plane\n AcquireTime: \n RenewTime: Mon, 11 Jan 2021 17:21:53 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Mon, 11 Jan 2021 17:20:37 +0000 Sun, 10 Jan 2021 17:37:43 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Mon, 11 Jan 2021 17:20:37 +0000 Sun, 10 Jan 2021 17:37:43 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Mon, 11 Jan 2021 17:20:37 +0000 Sun, 10 Jan 2021 17:37:43 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Mon, 11 Jan 2021 17:20:37 +0000 Sun, 10 Jan 2021 17:38:11 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.18.0.17\n Hostname: leguer-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759868Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759868Ki\n pods: 110\nSystem Info:\n Machine ID: 5f1cb3b1931a44e6bb33804f4b6ca7e5\n System UUID: c2287e83-2c9f-458f-8294-12965d8d5e30\n Boot ID: b267d78b-f69b-4338-80e8-3f4944338e5d\n Kernel Version: 4.15.0-118-generic\n OS Image: Ubuntu Groovy Gorilla (development branch)\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.4.0\n Kubelet Version: v1.20.0\n Kube-Proxy Version: v1.20.0\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nProviderID: kind://docker/leguer/leguer-control-plane\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-74ff55c5b-flmf7 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 23h\n kube-system coredns-74ff55c5b-whxn7 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 23h\n kube-system etcd-leguer-control-plane 100m (0%) 0 (0%) 100Mi (0%) 0 (0%) 23h\n kube-system kindnet-rjz52 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 23h\n kube-system kube-apiserver-leguer-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 23h\n kube-system kube-controller-manager-leguer-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 23h\n kube-system kube-proxy-chqjl 0 (0%) 0 (0%) 0 (0%) 0 (0%) 23h\n kube-system kube-scheduler-leguer-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 23h\n local-path-storage local-path-provisioner-78776bfc44-45fhs 0 (0%) 0 (0%) 0 (0%) 0 (0%) 23h\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests 
Limits\n -------- -------- ------\n cpu 950m (5%) 100m (0%)\n memory 290Mi (0%) 390Mi (0%)\n ephemeral-storage 100Mi (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents: \n" Jan 11 17:22:00.397: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-1288 describe namespace kubectl-1288' Jan 11 17:22:01.624: INFO: stderr: "" Jan 11 17:22:01.624: INFO: stdout: "Name: kubectl-1288\nLabels: e2e-framework=kubectl\n e2e-run=477e40b0-d99a-437b-90ad-78bfdbdf6d1f\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 17:22:01.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1288" for this suite. • [SLOW TEST:13.564 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl describe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1090 should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":309,"completed":164,"skipped":2732,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] Events should delete a collection of events [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 17:22:01.645: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a collection of events [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Create set of events Jan 11 17:22:03.643: INFO: created test-event-1 Jan 11 17:22:03.667: INFO: created test-event-2 Jan 11 17:22:03.697: INFO: created test-event-3 STEP: get a list of Events with a label in the current namespace STEP: delete collection of events Jan 11 17:22:03.771: INFO: requesting DeleteCollection of events STEP: check that the list of events matches the requested quantity Jan 11 17:22:03.807: INFO: requesting list of events to confirm quantity [AfterEach] [sig-api-machinery] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 17:22:03.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-7134" for this suite. 
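
For reference, the Events DeleteCollection step above can be approximated from the command line with a label-selector delete; the namespace and label below are illustrative placeholders, not values taken from this run:

# list the events carrying the test label, then delete them as a collection
kubectl get events -n events-demo -l testevent-set=true
kubectl delete events -n events-demo -l testevent-set=true
# a follow-up list should come back empty
kubectl get events -n events-demo -l testevent-set=true
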
•{"msg":"PASSED [sig-api-machinery] Events should delete a collection of events [Conformance]","total":309,"completed":165,"skipped":2743,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 17:22:03.899: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-2083.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-2083.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2083.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-2083.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-2083.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2083.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 11 17:22:12.131: INFO: DNS probes using dns-2083/dns-test-5649f3b6-b11e-4109-81f5-dfd3e1b9eb39 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 17:22:12.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2083" for this suite. 
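
The Hostname records probed above rely on a pod's hostname/subdomain fields paired with a headless Service; a minimal sketch (object names are illustrative, the image is the one used by this suite) looks like:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: dns-test-service-2
spec:
  clusterIP: None            # headless: required for per-pod hostname records
  selector:
    app: dns-querier
  ports:
  - port: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: dns-querier-2
  labels:
    app: dns-querier
spec:
  hostname: dns-querier-2
  subdomain: dns-test-service-2   # must match the headless Service name
  containers:
  - name: querier
    image: k8s.gcr.io/e2e-test-images/agnhost:2.21
    args: ["pause"]
EOF
# the pod is then resolvable from inside the cluster as
#   dns-querier-2.dns-test-service-2.<namespace>.svc.cluster.local
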
• [SLOW TEST:8.926 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":309,"completed":166,"skipped":2767,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 17:22:12.827: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating the pod Jan 11 17:22:17.559: INFO: Successfully updated pod "labelsupdate55abdaae-caae-4689-a4c6-7c1b21338c90" [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 17:22:21.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9729" for this suite. 
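
The labels-update behaviour verified above comes from a downwardAPI volume, which the kubelet refreshes when pod metadata changes; a minimal sketch (pod name and image are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: labelsupdate-demo
  labels:
    tier: backend
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: "labels"
        fieldRef:
          fieldPath: metadata.labels
EOF
# changing a label is reflected in the mounted file after the next kubelet sync
kubectl label pod labelsupdate-demo tier=frontend --overwrite
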
• [SLOW TEST:8.799 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36 should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":309,"completed":167,"skipped":2777,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 17:22:21.628: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 11 17:22:30.785: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 11 17:22:32.804: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745982550, loc:(*time.Location)(0x5f133f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745982550, loc:(*time.Location)(0x5f133f0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745982550, loc:(*time.Location)(0x5f133f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745982550, loc:(*time.Location)(0x5f133f0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 11 17:22:34.813: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745982550, loc:(*time.Location)(0x5f133f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745982550, loc:(*time.Location)(0x5f133f0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745982550, loc:(*time.Location)(0x5f133f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745982550, loc:(*time.Location)(0x5f133f0)}}, 
Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 11 17:22:37.861: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 11 17:22:37.867: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-1397-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 17:22:39.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1031" for this suite. STEP: Destroying namespace "webhook-1031-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:17.557 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":309,"completed":168,"skipped":2783,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 17:22:39.190: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should invoke init containers on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating the pod Jan 11 17:22:39.257: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 17:22:47.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-9724" for this suite. 
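
The InitContainer test above checks that init containers run to completion, in order, before the main container of a restartPolicy: Always pod starts; a minimal sketch (names and image are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  restartPolicy: Always
  initContainers:
  - name: init-1
    image: busybox
    command: ["sh", "-c", "echo init-1 done"]
  - name: init-2
    image: busybox
    command: ["sh", "-c", "echo init-2 done"]
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
EOF
# both init containers must report a terminated/completed state before 'main' starts
kubectl get pod init-demo -o jsonpath='{.status.initContainerStatuses[*].state}'
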
• [SLOW TEST:8.068 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should invoke init containers on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":309,"completed":169,"skipped":2830,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 17:22:47.264: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Jan 11 17:22:47.398: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-3182 40bb8f6f-a1ac-422c-b36e-26733bddb61b 204131 0 2021-01-11 17:22:47 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2021-01-11 17:22:47 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Jan 11 17:22:47.399: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-3182 40bb8f6f-a1ac-422c-b36e-26733bddb61b 204132 0 2021-01-11 17:22:47 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2021-01-11 17:22:47 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 17:22:47.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-3182" for this suite. 
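
The watch-from-resourceVersion behaviour above maps onto the ?watch=true&resourceVersion=<rv> query of the REST API; a rough sketch via kubectl proxy (namespace and object name are illustrative):

kubectl proxy --port=8001 &
RV=$(kubectl get configmap e2e-watch-test-resource-version -n watch-demo \
      -o jsonpath='{.metadata.resourceVersion}')
# replays every event on the configmaps resource that happened after the captured resourceVersion
curl -s "http://127.0.0.1:8001/api/v1/namespaces/watch-demo/configmaps?watch=true&resourceVersion=${RV}"
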
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":309,"completed":170,"skipped":2891,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 17:22:47.416: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating configMap with name cm-test-opt-del-b106307e-f158-4f0f-b97a-e26a9287d2fe STEP: Creating configMap with name cm-test-opt-upd-fefc4a0a-c8ca-4048-8fd4-febf64bad22b STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-b106307e-f158-4f0f-b97a-e26a9287d2fe STEP: Updating configmap cm-test-opt-upd-fefc4a0a-c8ca-4048-8fd4-febf64bad22b STEP: Creating configMap with name cm-test-opt-create-d5fc689b-e16b-4765-83af-862247c9d8ea STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 17:24:06.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7268" for this suite. 
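
The optional-ConfigMap updates exercised above hinge on projected volume sources marked optional: true, so a missing ConfigMap does not block the pod from starting; a minimal sketch (ConfigMap and pod names are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-demo
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: config
      mountPath: /etc/config
  volumes:
  - name: config
    projected:
      sources:
      - configMap:
          name: cm-test-opt-del
          optional: true        # pod starts even if this ConfigMap is absent
      - configMap:
          name: cm-test-opt-upd
          optional: true
EOF
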
• [SLOW TEST:79.026 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":309,"completed":171,"skipped":2914,"failed":0} SSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 17:24:06.443: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 11 17:24:06.527: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Jan 11 17:24:08.642: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 17:24:08.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-5326" for this suite. 
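
The quota-failure condition checked above can be reproduced by pointing a ReplicationController at a quota that is too small and then reading its status conditions; names and sizes below are illustrative:

kubectl create namespace quota-demo
kubectl create quota condition-test --hard=pods=2 -n quota-demo
kubectl create -n quota-demo -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: condition-test
spec:
  replicas: 3                  # asks for more pods than the quota allows
  selector:
    app: condition-test
  template:
    metadata:
      labels:
        app: condition-test
    spec:
      containers:
      - name: app
        image: busybox
        command: ["sh", "-c", "sleep 3600"]
EOF
# the controller surfaces a ReplicaFailure condition while it is over quota...
kubectl get rc condition-test -n quota-demo -o jsonpath='{.status.conditions}'
# ...and clears it once the RC fits within the quota again
kubectl scale rc condition-test -n quota-demo --replicas=2
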
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":309,"completed":172,"skipped":2918,"failed":0} SSSSS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Lease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 17:24:08.681: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [AfterEach] [k8s.io] Lease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 17:24:10.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-8651" for this suite. •{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":309,"completed":173,"skipped":2923,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 17:24:10.649: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should provide container's memory limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward API volume plugin Jan 11 17:24:11.469: INFO: Waiting up to 5m0s for pod "downwardapi-volume-74eefc1f-741f-49ae-942c-6a1787504027" in namespace "downward-api-6548" to be "Succeeded or Failed" Jan 11 17:24:11.502: INFO: Pod "downwardapi-volume-74eefc1f-741f-49ae-942c-6a1787504027": Phase="Pending", Reason="", readiness=false. Elapsed: 32.683918ms Jan 11 17:24:13.510: INFO: Pod "downwardapi-volume-74eefc1f-741f-49ae-942c-6a1787504027": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041280672s Jan 11 17:24:15.695: INFO: Pod "downwardapi-volume-74eefc1f-741f-49ae-942c-6a1787504027": Phase="Pending", Reason="", readiness=false. Elapsed: 4.225826593s Jan 11 17:24:17.701: INFO: Pod "downwardapi-volume-74eefc1f-741f-49ae-942c-6a1787504027": Phase="Running", Reason="", readiness=true. Elapsed: 6.232108651s Jan 11 17:24:19.718: INFO: Pod "downwardapi-volume-74eefc1f-741f-49ae-942c-6a1787504027": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.24946907s STEP: Saw pod success Jan 11 17:24:19.719: INFO: Pod "downwardapi-volume-74eefc1f-741f-49ae-942c-6a1787504027" satisfied condition "Succeeded or Failed" Jan 11 17:24:19.723: INFO: Trying to get logs from node leguer-worker2 pod downwardapi-volume-74eefc1f-741f-49ae-942c-6a1787504027 container client-container: STEP: delete the pod Jan 11 17:24:19.767: INFO: Waiting for pod downwardapi-volume-74eefc1f-741f-49ae-942c-6a1787504027 to disappear Jan 11 17:24:19.799: INFO: Pod downwardapi-volume-74eefc1f-741f-49ae-942c-6a1787504027 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 17:24:19.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6548" for this suite. • [SLOW TEST:9.167 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36 should provide container's memory limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":309,"completed":174,"skipped":2932,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 17:24:19.821: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4149.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-4149.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4149.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-4149.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 11 17:24:25.977: INFO: DNS probes using dns-test-64b2ecf7-35c8-442c-b8e6-89a716387801 succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4149.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-4149.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4149.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-4149.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to 
kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 11 17:24:36.124: INFO: File wheezy_udp@dns-test-service-3.dns-4149.svc.cluster.local from pod dns-4149/dns-test-e42a7be9-be5f-40df-9f57-e620efe23a4d contains 'foo.example.com. ' instead of 'bar.example.com.' Jan 11 17:24:36.129: INFO: File jessie_udp@dns-test-service-3.dns-4149.svc.cluster.local from pod dns-4149/dns-test-e42a7be9-be5f-40df-9f57-e620efe23a4d contains 'foo.example.com. ' instead of 'bar.example.com.' Jan 11 17:24:36.129: INFO: Lookups using dns-4149/dns-test-e42a7be9-be5f-40df-9f57-e620efe23a4d failed for: [wheezy_udp@dns-test-service-3.dns-4149.svc.cluster.local jessie_udp@dns-test-service-3.dns-4149.svc.cluster.local] Jan 11 17:24:41.138: INFO: File wheezy_udp@dns-test-service-3.dns-4149.svc.cluster.local from pod dns-4149/dns-test-e42a7be9-be5f-40df-9f57-e620efe23a4d contains 'foo.example.com. ' instead of 'bar.example.com.' Jan 11 17:24:41.143: INFO: File jessie_udp@dns-test-service-3.dns-4149.svc.cluster.local from pod dns-4149/dns-test-e42a7be9-be5f-40df-9f57-e620efe23a4d contains 'foo.example.com. ' instead of 'bar.example.com.' Jan 11 17:24:41.143: INFO: Lookups using dns-4149/dns-test-e42a7be9-be5f-40df-9f57-e620efe23a4d failed for: [wheezy_udp@dns-test-service-3.dns-4149.svc.cluster.local jessie_udp@dns-test-service-3.dns-4149.svc.cluster.local] Jan 11 17:24:46.136: INFO: File wheezy_udp@dns-test-service-3.dns-4149.svc.cluster.local from pod dns-4149/dns-test-e42a7be9-be5f-40df-9f57-e620efe23a4d contains 'foo.example.com. ' instead of 'bar.example.com.' Jan 11 17:24:46.140: INFO: File jessie_udp@dns-test-service-3.dns-4149.svc.cluster.local from pod dns-4149/dns-test-e42a7be9-be5f-40df-9f57-e620efe23a4d contains 'foo.example.com. ' instead of 'bar.example.com.' Jan 11 17:24:46.140: INFO: Lookups using dns-4149/dns-test-e42a7be9-be5f-40df-9f57-e620efe23a4d failed for: [wheezy_udp@dns-test-service-3.dns-4149.svc.cluster.local jessie_udp@dns-test-service-3.dns-4149.svc.cluster.local] Jan 11 17:24:51.138: INFO: File wheezy_udp@dns-test-service-3.dns-4149.svc.cluster.local from pod dns-4149/dns-test-e42a7be9-be5f-40df-9f57-e620efe23a4d contains 'foo.example.com. ' instead of 'bar.example.com.' Jan 11 17:24:51.143: INFO: File jessie_udp@dns-test-service-3.dns-4149.svc.cluster.local from pod dns-4149/dns-test-e42a7be9-be5f-40df-9f57-e620efe23a4d contains 'foo.example.com. ' instead of 'bar.example.com.' 
Jan 11 17:24:51.143: INFO: Lookups using dns-4149/dns-test-e42a7be9-be5f-40df-9f57-e620efe23a4d failed for: [wheezy_udp@dns-test-service-3.dns-4149.svc.cluster.local jessie_udp@dns-test-service-3.dns-4149.svc.cluster.local] Jan 11 17:24:56.143: INFO: DNS probes using dns-test-e42a7be9-be5f-40df-9f57-e620efe23a4d succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4149.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-4149.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4149.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-4149.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 11 17:25:02.833: INFO: DNS probes using dns-test-070e243e-a695-4f46-8a5c-b131f95b66f3 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 17:25:02.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4149" for this suite. • [SLOW TEST:43.176 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":309,"completed":175,"skipped":2977,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 17:25:02.998: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 17:25:14.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6772" for this suite. 
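
The service-lifecycle accounting above can be observed with a quota that counts Services; names are illustrative:

kubectl create namespace quota-svc-demo
kubectl create quota test-quota --hard=services=1 -n quota-svc-demo
kubectl create service clusterip test-svc --tcp=80:80 -n quota-svc-demo
kubectl describe quota test-quota -n quota-svc-demo     # Used column shows services: 1
kubectl delete service test-svc -n quota-svc-demo
kubectl describe quota test-quota -n quota-svc-demo     # usage is released back to 0
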
• [SLOW TEST:11.441 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":309,"completed":176,"skipped":2987,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 17:25:14.443: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating pod pod-subpath-test-secret-hmvq STEP: Creating a pod to test atomic-volume-subpath Jan 11 17:25:14.598: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-hmvq" in namespace "subpath-3673" to be "Succeeded or Failed" Jan 11 17:25:14.637: INFO: Pod "pod-subpath-test-secret-hmvq": Phase="Pending", Reason="", readiness=false. Elapsed: 38.601303ms Jan 11 17:25:16.645: INFO: Pod "pod-subpath-test-secret-hmvq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046470815s Jan 11 17:25:18.653: INFO: Pod "pod-subpath-test-secret-hmvq": Phase="Running", Reason="", readiness=true. Elapsed: 4.054528574s Jan 11 17:25:20.662: INFO: Pod "pod-subpath-test-secret-hmvq": Phase="Running", Reason="", readiness=true. Elapsed: 6.063741216s Jan 11 17:25:22.671: INFO: Pod "pod-subpath-test-secret-hmvq": Phase="Running", Reason="", readiness=true. Elapsed: 8.072653762s Jan 11 17:25:24.680: INFO: Pod "pod-subpath-test-secret-hmvq": Phase="Running", Reason="", readiness=true. Elapsed: 10.081361423s Jan 11 17:25:26.689: INFO: Pod "pod-subpath-test-secret-hmvq": Phase="Running", Reason="", readiness=true. Elapsed: 12.090821685s Jan 11 17:25:28.699: INFO: Pod "pod-subpath-test-secret-hmvq": Phase="Running", Reason="", readiness=true. Elapsed: 14.100240085s Jan 11 17:25:30.706: INFO: Pod "pod-subpath-test-secret-hmvq": Phase="Running", Reason="", readiness=true. Elapsed: 16.108099748s Jan 11 17:25:32.714: INFO: Pod "pod-subpath-test-secret-hmvq": Phase="Running", Reason="", readiness=true. Elapsed: 18.115640468s Jan 11 17:25:34.723: INFO: Pod "pod-subpath-test-secret-hmvq": Phase="Running", Reason="", readiness=true. Elapsed: 20.124972797s Jan 11 17:25:36.732: INFO: Pod "pod-subpath-test-secret-hmvq": Phase="Running", Reason="", readiness=true. Elapsed: 22.134180602s Jan 11 17:25:38.740: INFO: Pod "pod-subpath-test-secret-hmvq": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.141789974s STEP: Saw pod success Jan 11 17:25:38.740: INFO: Pod "pod-subpath-test-secret-hmvq" satisfied condition "Succeeded or Failed" Jan 11 17:25:38.745: INFO: Trying to get logs from node leguer-worker pod pod-subpath-test-secret-hmvq container test-container-subpath-secret-hmvq: STEP: delete the pod Jan 11 17:25:38.815: INFO: Waiting for pod pod-subpath-test-secret-hmvq to disappear Jan 11 17:25:38.836: INFO: Pod pod-subpath-test-secret-hmvq no longer exists STEP: Deleting pod pod-subpath-test-secret-hmvq Jan 11 17:25:38.836: INFO: Deleting pod "pod-subpath-test-secret-hmvq" in namespace "subpath-3673" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 17:25:38.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-3673" for this suite. • [SLOW TEST:24.412 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":309,"completed":177,"skipped":3009,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 17:25:38.858: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [BeforeEach] Kubectl label /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1314 STEP: creating the pod Jan 11 17:25:38.977: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-8741 create -f -' Jan 11 17:25:44.631: INFO: stderr: "" Jan 11 17:25:44.632: INFO: stdout: "pod/pause created\n" Jan 11 17:25:44.632: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Jan 11 17:25:44.632: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-8741" to be "running and ready" Jan 11 17:25:44.652: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 20.051779ms Jan 11 17:25:46.661: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029111496s Jan 11 17:25:48.670: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.037752466s Jan 11 17:25:48.670: INFO: Pod "pause" satisfied condition "running and ready" Jan 11 17:25:48.670: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: adding the label testing-label with value testing-label-value to a pod Jan 11 17:25:48.671: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-8741 label pods pause testing-label=testing-label-value' Jan 11 17:25:49.869: INFO: stderr: "" Jan 11 17:25:49.869: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Jan 11 17:25:49.870: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-8741 get pod pause -L testing-label' Jan 11 17:25:51.093: INFO: stderr: "" Jan 11 17:25:51.093: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 7s testing-label-value\n" STEP: removing the label testing-label of a pod Jan 11 17:25:51.094: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-8741 label pods pause testing-label-' Jan 11 17:25:52.246: INFO: stderr: "" Jan 11 17:25:52.246: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Jan 11 17:25:52.246: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-8741 get pod pause -L testing-label' Jan 11 17:25:53.424: INFO: stderr: "" Jan 11 17:25:53.424: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 9s \n" [AfterEach] Kubectl label /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1320 STEP: using delete to clean up resources Jan 11 17:25:53.425: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-8741 delete --grace-period=0 --force -f -' Jan 11 17:25:54.644: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 11 17:25:54.644: INFO: stdout: "pod \"pause\" force deleted\n" Jan 11 17:25:54.644: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-8741 get rc,svc -l name=pause --no-headers' Jan 11 17:25:55.854: INFO: stderr: "No resources found in kubectl-8741 namespace.\n" Jan 11 17:25:55.854: INFO: stdout: "" Jan 11 17:25:55.854: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-8741 get pods -l name=pause -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 11 17:25:57.263: INFO: stderr: "" Jan 11 17:25:57.263: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 17:25:57.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8741" for this suite. 
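
The label add/verify/remove cycle above follows the usual kubectl pattern: key=value attaches a label, -L surfaces it as a column, and a trailing dash removes it; condensed (pod and label names kept from the run, namespace illustrative):

kubectl label pod pause testing-label=testing-label-value -n demo
kubectl get pod pause -L testing-label -n demo
kubectl label pod pause testing-label- -n demo
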
• [SLOW TEST:18.425 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1312 should update the label on a resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":309,"completed":178,"skipped":3056,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 17:25:57.289: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating secret with name secret-test-ab7a2e83-1e0d-400e-93ae-533c78241d88 STEP: Creating a pod to test consume secrets Jan 11 17:25:57.403: INFO: Waiting up to 5m0s for pod "pod-secrets-cb5467c3-7088-41b0-8ac2-7c22cb65d7bd" in namespace "secrets-1902" to be "Succeeded or Failed" Jan 11 17:25:57.428: INFO: Pod "pod-secrets-cb5467c3-7088-41b0-8ac2-7c22cb65d7bd": Phase="Pending", Reason="", readiness=false. Elapsed: 24.741593ms Jan 11 17:25:59.442: INFO: Pod "pod-secrets-cb5467c3-7088-41b0-8ac2-7c22cb65d7bd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03890039s Jan 11 17:26:01.479: INFO: Pod "pod-secrets-cb5467c3-7088-41b0-8ac2-7c22cb65d7bd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.075901364s STEP: Saw pod success Jan 11 17:26:01.479: INFO: Pod "pod-secrets-cb5467c3-7088-41b0-8ac2-7c22cb65d7bd" satisfied condition "Succeeded or Failed" Jan 11 17:26:01.486: INFO: Trying to get logs from node leguer-worker pod pod-secrets-cb5467c3-7088-41b0-8ac2-7c22cb65d7bd container secret-volume-test: STEP: delete the pod Jan 11 17:26:01.517: INFO: Waiting for pod pod-secrets-cb5467c3-7088-41b0-8ac2-7c22cb65d7bd to disappear Jan 11 17:26:01.533: INFO: Pod pod-secrets-cb5467c3-7088-41b0-8ac2-7c22cb65d7bd no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 17:26:01.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1902" for this suite. 
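
The defaultMode check above corresponds to the secret volume's defaultMode field, which sets the permissions of the projected files; a minimal sketch (secret name, pod name, and mode are illustrative):

kubectl create secret generic demo-secret --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-defaultmode-demo
spec:
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/secret-volume && sleep 3600"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: demo-secret
      defaultMode: 0400        # files are created read-only for the owner
EOF
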
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":179,"skipped":3160,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 17:26:01.615: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-2395 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-2395 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-2395 Jan 11 17:26:01.806: INFO: Found 0 stateful pods, waiting for 1 Jan 11 17:26:11.814: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Jan 11 17:26:11.820: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-2395 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 11 17:26:13.455: INFO: stderr: "I0111 17:26:13.285144 2421 log.go:181] (0x2752230) (0x2752310) Create stream\nI0111 17:26:13.287685 2421 log.go:181] (0x2752230) (0x2752310) Stream added, broadcasting: 1\nI0111 17:26:13.305814 2421 log.go:181] (0x2752230) Reply frame received for 1\nI0111 17:26:13.306277 2421 log.go:181] (0x2752230) (0x2752380) Create stream\nI0111 17:26:13.306340 2421 log.go:181] (0x2752230) (0x2752380) Stream added, broadcasting: 3\nI0111 17:26:13.307652 2421 log.go:181] (0x2752230) Reply frame received for 3\nI0111 17:26:13.307885 2421 log.go:181] (0x2752230) (0x2b32070) Create stream\nI0111 17:26:13.307949 2421 log.go:181] (0x2752230) (0x2b32070) Stream added, broadcasting: 5\nI0111 17:26:13.309339 2421 log.go:181] (0x2752230) Reply frame received for 5\nI0111 17:26:13.386392 2421 log.go:181] (0x2752230) Data frame received for 5\nI0111 17:26:13.386671 2421 log.go:181] (0x2b32070) (5) Data frame handling\nI0111 17:26:13.387099 2421 log.go:181] (0x2b32070) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0111 17:26:13.438328 2421 log.go:181] (0x2752230) Data frame received for 3\nI0111 17:26:13.438451 2421 log.go:181] (0x2752380) (3) Data frame handling\nI0111 17:26:13.438541 2421 log.go:181] (0x2752380) (3) 
Data frame sent\nI0111 17:26:13.438610 2421 log.go:181] (0x2752230) Data frame received for 3\nI0111 17:26:13.438662 2421 log.go:181] (0x2752380) (3) Data frame handling\nI0111 17:26:13.438924 2421 log.go:181] (0x2752230) Data frame received for 5\nI0111 17:26:13.439156 2421 log.go:181] (0x2b32070) (5) Data frame handling\nI0111 17:26:13.440329 2421 log.go:181] (0x2752230) Data frame received for 1\nI0111 17:26:13.440399 2421 log.go:181] (0x2752310) (1) Data frame handling\nI0111 17:26:13.440512 2421 log.go:181] (0x2752310) (1) Data frame sent\nI0111 17:26:13.441661 2421 log.go:181] (0x2752230) (0x2752310) Stream removed, broadcasting: 1\nI0111 17:26:13.444386 2421 log.go:181] (0x2752230) Go away received\nI0111 17:26:13.446124 2421 log.go:181] (0x2752230) (0x2752310) Stream removed, broadcasting: 1\nI0111 17:26:13.446374 2421 log.go:181] (0x2752230) (0x2752380) Stream removed, broadcasting: 3\nI0111 17:26:13.446627 2421 log.go:181] (0x2752230) (0x2b32070) Stream removed, broadcasting: 5\n" Jan 11 17:26:13.456: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 11 17:26:13.457: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 11 17:26:13.465: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Jan 11 17:26:23.474: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jan 11 17:26:23.475: INFO: Waiting for statefulset status.replicas updated to 0 Jan 11 17:26:23.506: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999832125s Jan 11 17:26:24.518: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.986991915s Jan 11 17:26:25.528: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.975413769s Jan 11 17:26:26.537: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.966000032s Jan 11 17:26:27.546: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.95648891s Jan 11 17:26:28.555: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.948259099s Jan 11 17:26:29.565: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.939308454s Jan 11 17:26:30.573: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.928562418s Jan 11 17:26:31.582: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.920536731s Jan 11 17:26:32.592: INFO: Verifying statefulset ss doesn't scale past 1 for another 911.971657ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-2395 Jan 11 17:26:33.603: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-2395 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 11 17:26:35.087: INFO: stderr: "I0111 17:26:34.946651 2441 log.go:181] (0x311c000) (0x311c070) Create stream\nI0111 17:26:34.950420 2441 log.go:181] (0x311c000) (0x311c070) Stream added, broadcasting: 1\nI0111 17:26:34.969402 2441 log.go:181] (0x311c000) Reply frame received for 1\nI0111 17:26:34.970009 2441 log.go:181] (0x311c000) (0x32100e0) Create stream\nI0111 17:26:34.970100 2441 log.go:181] (0x311c000) (0x32100e0) Stream added, broadcasting: 3\nI0111 17:26:34.971639 2441 log.go:181] (0x311c000) Reply frame received for 3\nI0111 17:26:34.971864 2441 log.go:181] (0x311c000) (0x28701c0) Create stream\nI0111 17:26:34.971927 
2441 log.go:181] (0x311c000) (0x28701c0) Stream added, broadcasting: 5\nI0111 17:26:34.972996 2441 log.go:181] (0x311c000) Reply frame received for 5\nI0111 17:26:35.069185 2441 log.go:181] (0x311c000) Data frame received for 3\nI0111 17:26:35.069520 2441 log.go:181] (0x311c000) Data frame received for 1\nI0111 17:26:35.069873 2441 log.go:181] (0x32100e0) (3) Data frame handling\nI0111 17:26:35.070000 2441 log.go:181] (0x311c070) (1) Data frame handling\nI0111 17:26:35.070204 2441 log.go:181] (0x311c000) Data frame received for 5\nI0111 17:26:35.070386 2441 log.go:181] (0x28701c0) (5) Data frame handling\nI0111 17:26:35.071133 2441 log.go:181] (0x32100e0) (3) Data frame sent\nI0111 17:26:35.071327 2441 log.go:181] (0x311c070) (1) Data frame sent\nI0111 17:26:35.071669 2441 log.go:181] (0x28701c0) (5) Data frame sent\nI0111 17:26:35.072048 2441 log.go:181] (0x311c000) Data frame received for 3\nI0111 17:26:35.072192 2441 log.go:181] (0x32100e0) (3) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0111 17:26:35.073067 2441 log.go:181] (0x311c000) Data frame received for 5\nI0111 17:26:35.073228 2441 log.go:181] (0x28701c0) (5) Data frame handling\nI0111 17:26:35.074300 2441 log.go:181] (0x311c000) (0x311c070) Stream removed, broadcasting: 1\nI0111 17:26:35.076760 2441 log.go:181] (0x311c000) Go away received\nI0111 17:26:35.078881 2441 log.go:181] (0x311c000) (0x311c070) Stream removed, broadcasting: 1\nI0111 17:26:35.079125 2441 log.go:181] (0x311c000) (0x32100e0) Stream removed, broadcasting: 3\nI0111 17:26:35.079320 2441 log.go:181] (0x311c000) (0x28701c0) Stream removed, broadcasting: 5\n" Jan 11 17:26:35.087: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 11 17:26:35.087: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 11 17:26:35.094: INFO: Found 1 stateful pods, waiting for 3 Jan 11 17:26:45.107: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jan 11 17:26:45.107: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jan 11 17:26:45.107: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Jan 11 17:26:45.125: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-2395 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 11 17:26:46.552: INFO: stderr: "I0111 17:26:46.411153 2462 log.go:181] (0x2516c40) (0x2516ee0) Create stream\nI0111 17:26:46.415949 2462 log.go:181] (0x2516c40) (0x2516ee0) Stream added, broadcasting: 1\nI0111 17:26:46.433684 2462 log.go:181] (0x2516c40) Reply frame received for 1\nI0111 17:26:46.434218 2462 log.go:181] (0x2516c40) (0x2baa540) Create stream\nI0111 17:26:46.434316 2462 log.go:181] (0x2516c40) (0x2baa540) Stream added, broadcasting: 3\nI0111 17:26:46.435969 2462 log.go:181] (0x2516c40) Reply frame received for 3\nI0111 17:26:46.436304 2462 log.go:181] (0x2516c40) (0x241a070) Create stream\nI0111 17:26:46.436382 2462 log.go:181] (0x2516c40) (0x241a070) Stream added, broadcasting: 5\nI0111 17:26:46.437493 2462 log.go:181] (0x2516c40) Reply frame received for 5\nI0111 17:26:46.532223 2462 log.go:181] (0x2516c40) Data frame received for 3\nI0111 17:26:46.532505 2462 
log.go:181] (0x2516c40) Data frame received for 1\nI0111 17:26:46.532824 2462 log.go:181] (0x2516c40) Data frame received for 5\nI0111 17:26:46.533107 2462 log.go:181] (0x2baa540) (3) Data frame handling\nI0111 17:26:46.533364 2462 log.go:181] (0x241a070) (5) Data frame handling\nI0111 17:26:46.533711 2462 log.go:181] (0x2516ee0) (1) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0111 17:26:46.534641 2462 log.go:181] (0x2baa540) (3) Data frame sent\nI0111 17:26:46.534771 2462 log.go:181] (0x241a070) (5) Data frame sent\nI0111 17:26:46.535889 2462 log.go:181] (0x2516c40) Data frame received for 5\nI0111 17:26:46.535993 2462 log.go:181] (0x2516c40) Data frame received for 3\nI0111 17:26:46.536134 2462 log.go:181] (0x2baa540) (3) Data frame handling\nI0111 17:26:46.536266 2462 log.go:181] (0x2516ee0) (1) Data frame sent\nI0111 17:26:46.536482 2462 log.go:181] (0x241a070) (5) Data frame handling\nI0111 17:26:46.538564 2462 log.go:181] (0x2516c40) (0x2516ee0) Stream removed, broadcasting: 1\nI0111 17:26:46.538877 2462 log.go:181] (0x2516c40) Go away received\nI0111 17:26:46.542734 2462 log.go:181] (0x2516c40) (0x2516ee0) Stream removed, broadcasting: 1\nI0111 17:26:46.543055 2462 log.go:181] (0x2516c40) (0x2baa540) Stream removed, broadcasting: 3\nI0111 17:26:46.543291 2462 log.go:181] (0x2516c40) (0x241a070) Stream removed, broadcasting: 5\n" Jan 11 17:26:46.553: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 11 17:26:46.553: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 11 17:26:46.553: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-2395 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 11 17:26:48.034: INFO: stderr: "I0111 17:26:47.904544 2483 log.go:181] (0x24a2c40) (0x24a2ee0) Create stream\nI0111 17:26:47.906659 2483 log.go:181] (0x24a2c40) (0x24a2ee0) Stream added, broadcasting: 1\nI0111 17:26:47.922785 2483 log.go:181] (0x24a2c40) Reply frame received for 1\nI0111 17:26:47.923303 2483 log.go:181] (0x24a2c40) (0x2b0a0e0) Create stream\nI0111 17:26:47.923380 2483 log.go:181] (0x24a2c40) (0x2b0a0e0) Stream added, broadcasting: 3\nI0111 17:26:47.924504 2483 log.go:181] (0x24a2c40) Reply frame received for 3\nI0111 17:26:47.924730 2483 log.go:181] (0x24a2c40) (0x2e58070) Create stream\nI0111 17:26:47.924791 2483 log.go:181] (0x24a2c40) (0x2e58070) Stream added, broadcasting: 5\nI0111 17:26:47.925772 2483 log.go:181] (0x24a2c40) Reply frame received for 5\nI0111 17:26:47.983158 2483 log.go:181] (0x24a2c40) Data frame received for 5\nI0111 17:26:47.983472 2483 log.go:181] (0x2e58070) (5) Data frame handling\nI0111 17:26:47.983994 2483 log.go:181] (0x2e58070) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0111 17:26:48.017279 2483 log.go:181] (0x24a2c40) Data frame received for 3\nI0111 17:26:48.017452 2483 log.go:181] (0x24a2c40) Data frame received for 5\nI0111 17:26:48.017660 2483 log.go:181] (0x2e58070) (5) Data frame handling\nI0111 17:26:48.017861 2483 log.go:181] (0x2b0a0e0) (3) Data frame handling\nI0111 17:26:48.018043 2483 log.go:181] (0x2b0a0e0) (3) Data frame sent\nI0111 17:26:48.018202 2483 log.go:181] (0x24a2c40) Data frame received for 3\nI0111 17:26:48.018340 2483 log.go:181] (0x2b0a0e0) (3) Data frame handling\nI0111 17:26:48.019024 2483 log.go:181] (0x24a2c40) Data frame 
received for 1\nI0111 17:26:48.019133 2483 log.go:181] (0x24a2ee0) (1) Data frame handling\nI0111 17:26:48.019292 2483 log.go:181] (0x24a2ee0) (1) Data frame sent\nI0111 17:26:48.020291 2483 log.go:181] (0x24a2c40) (0x24a2ee0) Stream removed, broadcasting: 1\nI0111 17:26:48.022218 2483 log.go:181] (0x24a2c40) Go away received\nI0111 17:26:48.026051 2483 log.go:181] (0x24a2c40) (0x24a2ee0) Stream removed, broadcasting: 1\nI0111 17:26:48.026227 2483 log.go:181] (0x24a2c40) (0x2b0a0e0) Stream removed, broadcasting: 3\nI0111 17:26:48.026386 2483 log.go:181] (0x24a2c40) (0x2e58070) Stream removed, broadcasting: 5\n" Jan 11 17:26:48.035: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 11 17:26:48.035: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 11 17:26:48.035: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-2395 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 11 17:26:49.610: INFO: stderr: "I0111 17:26:49.441286 2504 log.go:181] (0x24a68c0) (0x24a6af0) Create stream\nI0111 17:26:49.447459 2504 log.go:181] (0x24a68c0) (0x24a6af0) Stream added, broadcasting: 1\nI0111 17:26:49.471061 2504 log.go:181] (0x24a68c0) Reply frame received for 1\nI0111 17:26:49.471567 2504 log.go:181] (0x24a68c0) (0x283a070) Create stream\nI0111 17:26:49.471634 2504 log.go:181] (0x24a68c0) (0x283a070) Stream added, broadcasting: 3\nI0111 17:26:49.473335 2504 log.go:181] (0x24a68c0) Reply frame received for 3\nI0111 17:26:49.473908 2504 log.go:181] (0x24a68c0) (0x25d0070) Create stream\nI0111 17:26:49.474017 2504 log.go:181] (0x24a68c0) (0x25d0070) Stream added, broadcasting: 5\nI0111 17:26:49.475366 2504 log.go:181] (0x24a68c0) Reply frame received for 5\nI0111 17:26:49.565297 2504 log.go:181] (0x24a68c0) Data frame received for 5\nI0111 17:26:49.565558 2504 log.go:181] (0x25d0070) (5) Data frame handling\nI0111 17:26:49.566040 2504 log.go:181] (0x25d0070) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0111 17:26:49.592210 2504 log.go:181] (0x24a68c0) Data frame received for 3\nI0111 17:26:49.592431 2504 log.go:181] (0x283a070) (3) Data frame handling\nI0111 17:26:49.592625 2504 log.go:181] (0x24a68c0) Data frame received for 5\nI0111 17:26:49.592823 2504 log.go:181] (0x25d0070) (5) Data frame handling\nI0111 17:26:49.593080 2504 log.go:181] (0x283a070) (3) Data frame sent\nI0111 17:26:49.593220 2504 log.go:181] (0x24a68c0) Data frame received for 3\nI0111 17:26:49.593352 2504 log.go:181] (0x283a070) (3) Data frame handling\nI0111 17:26:49.593925 2504 log.go:181] (0x24a68c0) Data frame received for 1\nI0111 17:26:49.594061 2504 log.go:181] (0x24a6af0) (1) Data frame handling\nI0111 17:26:49.594189 2504 log.go:181] (0x24a6af0) (1) Data frame sent\nI0111 17:26:49.595186 2504 log.go:181] (0x24a68c0) (0x24a6af0) Stream removed, broadcasting: 1\nI0111 17:26:49.597762 2504 log.go:181] (0x24a68c0) Go away received\nI0111 17:26:49.601222 2504 log.go:181] (0x24a68c0) (0x24a6af0) Stream removed, broadcasting: 1\nI0111 17:26:49.601503 2504 log.go:181] (0x24a68c0) (0x283a070) Stream removed, broadcasting: 3\nI0111 17:26:49.601879 2504 log.go:181] (0x24a68c0) (0x25d0070) Stream removed, broadcasting: 5\n" Jan 11 17:26:49.611: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 11 17:26:49.611: INFO: stdout of mv -v 
/usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 11 17:26:49.612: INFO: Waiting for statefulset status.replicas updated to 0 Jan 11 17:26:49.618: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Jan 11 17:26:59.634: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jan 11 17:26:59.635: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jan 11 17:26:59.635: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jan 11 17:26:59.658: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999975894s Jan 11 17:27:00.668: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.989622399s Jan 11 17:27:01.680: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.979217052s Jan 11 17:27:02.690: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.967538009s Jan 11 17:27:03.700: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.95694101s Jan 11 17:27:04.708: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.94684674s Jan 11 17:27:05.719: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.939430045s Jan 11 17:27:06.729: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.92835992s Jan 11 17:27:07.740: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.918274022s Jan 11 17:27:08.768: INFO: Verifying statefulset ss doesn't scale past 3 for another 906.826022ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-2395 Jan 11 17:27:09.780: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-2395 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 11 17:27:11.356: INFO: stderr: "I0111 17:27:11.238486 2525 log.go:181] (0x2e9cb60) (0x2e9cbd0) Create stream\nI0111 17:27:11.240460 2525 log.go:181] (0x2e9cb60) (0x2e9cbd0) Stream added, broadcasting: 1\nI0111 17:27:11.251319 2525 log.go:181] (0x2e9cb60) Reply frame received for 1\nI0111 17:27:11.251862 2525 log.go:181] (0x2e9cb60) (0x2ab40e0) Create stream\nI0111 17:27:11.251937 2525 log.go:181] (0x2e9cb60) (0x2ab40e0) Stream added, broadcasting: 3\nI0111 17:27:11.253628 2525 log.go:181] (0x2e9cb60) Reply frame received for 3\nI0111 17:27:11.253841 2525 log.go:181] (0x2e9cb60) (0x2e9cd90) Create stream\nI0111 17:27:11.253899 2525 log.go:181] (0x2e9cb60) (0x2e9cd90) Stream added, broadcasting: 5\nI0111 17:27:11.255514 2525 log.go:181] (0x2e9cb60) Reply frame received for 5\nI0111 17:27:11.338595 2525 log.go:181] (0x2e9cb60) Data frame received for 3\nI0111 17:27:11.339595 2525 log.go:181] (0x2ab40e0) (3) Data frame handling\nI0111 17:27:11.341263 2525 log.go:181] (0x2ab40e0) (3) Data frame sent\nI0111 17:27:11.342246 2525 log.go:181] (0x2e9cb60) Data frame received for 3\nI0111 17:27:11.342495 2525 log.go:181] (0x2ab40e0) (3) Data frame handling\nI0111 17:27:11.342833 2525 log.go:181] (0x2e9cb60) Data frame received for 5\nI0111 17:27:11.343039 2525 log.go:181] (0x2e9cd90) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0111 17:27:11.343240 2525 log.go:181] (0x2e9cd90) (5) Data frame sent\nI0111 17:27:11.343515 2525 log.go:181] (0x2e9cb60) Data frame received for 5\nI0111 17:27:11.343676 2525 log.go:181] (0x2e9cd90) (5) Data 
frame handling\nI0111 17:27:11.345082 2525 log.go:181] (0x2e9cb60) Data frame received for 1\nI0111 17:27:11.345201 2525 log.go:181] (0x2e9cbd0) (1) Data frame handling\nI0111 17:27:11.345280 2525 log.go:181] (0x2e9cbd0) (1) Data frame sent\nI0111 17:27:11.345957 2525 log.go:181] (0x2e9cb60) (0x2e9cbd0) Stream removed, broadcasting: 1\nI0111 17:27:11.347826 2525 log.go:181] (0x2e9cb60) Go away received\nI0111 17:27:11.349659 2525 log.go:181] (0x2e9cb60) (0x2e9cbd0) Stream removed, broadcasting: 1\nI0111 17:27:11.349800 2525 log.go:181] (0x2e9cb60) (0x2ab40e0) Stream removed, broadcasting: 3\nI0111 17:27:11.349913 2525 log.go:181] (0x2e9cb60) (0x2e9cd90) Stream removed, broadcasting: 5\n" Jan 11 17:27:11.357: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 11 17:27:11.357: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 11 17:27:11.357: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-2395 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 11 17:27:12.796: INFO: stderr: "I0111 17:27:12.692677 2545 log.go:181] (0x2a50e00) (0x2a50e70) Create stream\nI0111 17:27:12.696687 2545 log.go:181] (0x2a50e00) (0x2a50e70) Stream added, broadcasting: 1\nI0111 17:27:12.715143 2545 log.go:181] (0x2a50e00) Reply frame received for 1\nI0111 17:27:12.715754 2545 log.go:181] (0x2a50e00) (0x27442a0) Create stream\nI0111 17:27:12.715843 2545 log.go:181] (0x2a50e00) (0x27442a0) Stream added, broadcasting: 3\nI0111 17:27:12.717449 2545 log.go:181] (0x2a50e00) Reply frame received for 3\nI0111 17:27:12.717814 2545 log.go:181] (0x2a50e00) (0x27444d0) Create stream\nI0111 17:27:12.717880 2545 log.go:181] (0x2a50e00) (0x27444d0) Stream added, broadcasting: 5\nI0111 17:27:12.718998 2545 log.go:181] (0x2a50e00) Reply frame received for 5\nI0111 17:27:12.779350 2545 log.go:181] (0x2a50e00) Data frame received for 3\nI0111 17:27:12.779712 2545 log.go:181] (0x2a50e00) Data frame received for 5\nI0111 17:27:12.779999 2545 log.go:181] (0x27444d0) (5) Data frame handling\nI0111 17:27:12.780434 2545 log.go:181] (0x2a50e00) Data frame received for 1\nI0111 17:27:12.780621 2545 log.go:181] (0x2a50e70) (1) Data frame handling\nI0111 17:27:12.780717 2545 log.go:181] (0x27442a0) (3) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0111 17:27:12.782077 2545 log.go:181] (0x27442a0) (3) Data frame sent\nI0111 17:27:12.782221 2545 log.go:181] (0x27444d0) (5) Data frame sent\nI0111 17:27:12.782307 2545 log.go:181] (0x2a50e00) Data frame received for 3\nI0111 17:27:12.782399 2545 log.go:181] (0x2a50e70) (1) Data frame sent\nI0111 17:27:12.782604 2545 log.go:181] (0x27442a0) (3) Data frame handling\nI0111 17:27:12.782851 2545 log.go:181] (0x2a50e00) Data frame received for 5\nI0111 17:27:12.783232 2545 log.go:181] (0x2a50e00) (0x2a50e70) Stream removed, broadcasting: 1\nI0111 17:27:12.785748 2545 log.go:181] (0x27444d0) (5) Data frame handling\nI0111 17:27:12.786013 2545 log.go:181] (0x2a50e00) Go away received\nI0111 17:27:12.788015 2545 log.go:181] (0x2a50e00) (0x2a50e70) Stream removed, broadcasting: 1\nI0111 17:27:12.788315 2545 log.go:181] (0x2a50e00) (0x27442a0) Stream removed, broadcasting: 3\nI0111 17:27:12.788521 2545 log.go:181] (0x2a50e00) (0x27444d0) Stream removed, broadcasting: 5\n" Jan 11 17:27:12.797: INFO: stdout: "'/tmp/index.html' -> 
'/usr/local/apache2/htdocs/index.html'\n" Jan 11 17:27:12.797: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 11 17:27:12.797: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-2395 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 11 17:27:14.529: INFO: rc: 1 Jan 11 17:27:14.529: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-2395 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 Jan 11 17:27:24.531: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-2395 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 11 17:27:26.581: INFO: rc: 1 Jan 11 17:27:26.581: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-2395 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 Jan 11 17:27:36.582: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-2395 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 11 17:27:37.916: INFO: rc: 1 Jan 11 17:27:37.917: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-2395 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 Jan 11 17:27:47.918: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-2395 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 11 17:27:49.045: INFO: rc: 1 Jan 11 17:27:49.046: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-2395 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 11 17:27:59.047: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-2395 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 11 17:28:00.385: INFO: rc: 1 Jan 11 17:28:00.386: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-2395 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 11 
17:28:10.387: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-2395 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 11 17:28:11.591: INFO: rc: 1 Jan 11 17:28:11.591: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-2395 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 11 17:28:21.593: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-2395 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 11 17:28:22.763: INFO: rc: 1 Jan 11 17:28:22.763: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-2395 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 11 17:28:32.764: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-2395 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 11 17:28:33.941: INFO: rc: 1 Jan 11 17:28:33.941: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-2395 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 11 17:28:43.943: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-2395 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 11 17:28:45.108: INFO: rc: 1 Jan 11 17:28:45.108: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-2395 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 11 17:28:55.109: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-2395 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 11 17:28:56.401: INFO: rc: 1 Jan 11 17:28:56.401: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-2395 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 11 17:29:06.403: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-2395 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 11 17:29:07.564: INFO: rc: 1 
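The repeated exec failures in this stretch of the log do not abort the run: the framework retries the same command roughly every 10 seconds, first seeing "container not found" while the webserver container in ss-2 is shutting down and then "pods \"ss-2\" not found" once the pod object is deleted, which is consistent with ss-2 simply going away as part of the scale-down to 0 replicas. A minimal shell sketch of that retry pattern follows; the namespace and pod name mirror this run (and the namespace is destroyed when the test ends), while the loop and attempt count are purely illustrative, not the framework's implementation:

NS=statefulset-2395   # namespace from this run; deleted once the suite tears down
POD=ss-2              # the pod being retried against
for attempt in $(seq 1 30); do
  # kubectl exec itself exits non-zero while the container/pod is gone, so retry on failure.
  if kubectl -n "$NS" exec "$POD" -- /bin/sh -c 'mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'; then
    break
  fi
  echo "attempt $attempt failed, retrying in 10s"
  sleep 10
done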
Jan 11 17:29:07.565: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-2395 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 11 17:29:17.565: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-2395 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 11 17:29:18.791: INFO: rc: 1 Jan 11 17:29:18.791: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-2395 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 11 17:29:28.792: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-2395 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 11 17:29:29.966: INFO: rc: 1 Jan 11 17:29:29.967: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-2395 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 11 17:29:39.968: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-2395 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 11 17:29:41.170: INFO: rc: 1 Jan 11 17:29:41.170: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-2395 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 11 17:29:51.171: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-2395 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 11 17:29:52.370: INFO: rc: 1 Jan 11 17:29:52.371: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-2395 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 11 17:30:02.372: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-2395 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 11 17:30:03.555: INFO: rc: 1 Jan 11 17:30:03.555: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-2395 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html 
/usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 11 17:30:13.557: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-2395 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 11 17:30:14.691: INFO: rc: 1 Jan 11 17:30:14.692: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-2395 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 11 17:30:24.693: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-2395 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 11 17:30:25.882: INFO: rc: 1 Jan 11 17:30:25.883: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-2395 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 11 17:30:35.884: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-2395 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 11 17:30:37.014: INFO: rc: 1 Jan 11 17:30:37.014: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-2395 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 11 17:30:47.015: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-2395 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 11 17:30:48.208: INFO: rc: 1 Jan 11 17:30:48.208: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-2395 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 11 17:30:58.209: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-2395 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 11 17:30:59.424: INFO: rc: 1 Jan 11 17:30:59.424: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-2395 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 11 17:31:09.425: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config 
--namespace=statefulset-2395 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 11 17:31:10.594: INFO: rc: 1 Jan 11 17:31:10.594: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-2395 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 11 17:31:20.595: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-2395 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 11 17:31:21.866: INFO: rc: 1 Jan 11 17:31:21.866: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-2395 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 11 17:31:31.868: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-2395 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 11 17:31:33.036: INFO: rc: 1 Jan 11 17:31:33.036: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-2395 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 11 17:31:43.037: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-2395 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 11 17:31:44.243: INFO: rc: 1 Jan 11 17:31:44.243: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-2395 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 11 17:31:54.244: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-2395 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 11 17:31:55.398: INFO: rc: 1 Jan 11 17:31:55.398: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-2395 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 11 17:32:05.399: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-2395 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 11 17:32:06.568: INFO: rc: 1 Jan 11 17:32:06.568: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl 
--server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-2395 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 11 17:32:16.569: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-2395 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 11 17:32:17.780: INFO: rc: 1 Jan 11 17:32:17.780: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: Jan 11 17:32:17.781: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Jan 11 17:32:17.795: INFO: Deleting all statefulset in ns statefulset-2395 Jan 11 17:32:17.799: INFO: Scaling statefulset ss to 0 Jan 11 17:32:17.811: INFO: Waiting for statefulset status.replicas updated to 0 Jan 11 17:32:17.814: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 17:32:17.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2395" for this suite. • [SLOW TEST:376.227 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":309,"completed":180,"skipped":3202,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 17:32:17.845: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 17:32:35.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-39" for this suite. • [SLOW TEST:17.201 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":309,"completed":181,"skipped":3214,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 17:32:35.047: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Performing setup for networking test in namespace pod-network-test-8408 STEP: creating a selector STEP: Creating the service pods in kubernetes Jan 11 17:32:35.149: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Jan 11 17:32:35.270: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 11 17:32:37.391: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 11 17:32:39.279: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 11 17:32:41.284: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 11 17:32:43.278: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 11 17:32:45.277: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 11 17:32:47.277: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 11 17:32:49.278: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 11 17:32:51.279: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 11 17:32:53.279: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 11 17:32:55.278: INFO: The status of Pod netserver-0 is Running (Ready = true) Jan 11 17:32:55.306: INFO: The status of Pod netserver-1 is Running 
(Ready = false) Jan 11 17:32:57.313: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Jan 11 17:33:03.433: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 Jan 11 17:33:03.434: INFO: Going to poll 10.244.2.62 on port 8080 at least 0 times, with a maximum of 34 tries before failing Jan 11 17:33:03.441: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.62:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-8408 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 11 17:33:03.441: INFO: >>> kubeConfig: /root/.kube/config I0111 17:33:03.555440 10 log.go:181] (0xa095570) (0xa0955e0) Create stream I0111 17:33:03.555599 10 log.go:181] (0xa095570) (0xa0955e0) Stream added, broadcasting: 1 I0111 17:33:03.559345 10 log.go:181] (0xa095570) Reply frame received for 1 I0111 17:33:03.559521 10 log.go:181] (0xa095570) (0xa99e0e0) Create stream I0111 17:33:03.559614 10 log.go:181] (0xa095570) (0xa99e0e0) Stream added, broadcasting: 3 I0111 17:33:03.560961 10 log.go:181] (0xa095570) Reply frame received for 3 I0111 17:33:03.561078 10 log.go:181] (0xa095570) (0xa095880) Create stream I0111 17:33:03.561141 10 log.go:181] (0xa095570) (0xa095880) Stream added, broadcasting: 5 I0111 17:33:03.562260 10 log.go:181] (0xa095570) Reply frame received for 5 I0111 17:33:03.668236 10 log.go:181] (0xa095570) Data frame received for 3 I0111 17:33:03.668426 10 log.go:181] (0xa095570) Data frame received for 5 I0111 17:33:03.668614 10 log.go:181] (0xa095880) (5) Data frame handling I0111 17:33:03.668827 10 log.go:181] (0xa99e0e0) (3) Data frame handling I0111 17:33:03.669088 10 log.go:181] (0xa99e0e0) (3) Data frame sent I0111 17:33:03.669204 10 log.go:181] (0xa095570) Data frame received for 3 I0111 17:33:03.669306 10 log.go:181] (0xa99e0e0) (3) Data frame handling I0111 17:33:03.669980 10 log.go:181] (0xa095570) Data frame received for 1 I0111 17:33:03.670101 10 log.go:181] (0xa0955e0) (1) Data frame handling I0111 17:33:03.670293 10 log.go:181] (0xa0955e0) (1) Data frame sent I0111 17:33:03.670477 10 log.go:181] (0xa095570) (0xa0955e0) Stream removed, broadcasting: 1 I0111 17:33:03.670598 10 log.go:181] (0xa095570) Go away received I0111 17:33:03.670942 10 log.go:181] (0xa095570) (0xa0955e0) Stream removed, broadcasting: 1 I0111 17:33:03.671022 10 log.go:181] (0xa095570) (0xa99e0e0) Stream removed, broadcasting: 3 I0111 17:33:03.671100 10 log.go:181] (0xa095570) (0xa095880) Stream removed, broadcasting: 5 Jan 11 17:33:03.671: INFO: Found all 1 expected endpoints: [netserver-0] Jan 11 17:33:03.671: INFO: Going to poll 10.244.1.81 on port 8080 at least 0 times, with a maximum of 34 tries before failing Jan 11 17:33:03.676: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.81:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-8408 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 11 17:33:03.676: INFO: >>> kubeConfig: /root/.kube/config I0111 17:33:03.783141 10 log.go:181] (0x75887e0) (0x75889a0) Create stream I0111 17:33:03.783259 10 log.go:181] (0x75887e0) (0x75889a0) Stream added, broadcasting: 1 I0111 17:33:03.786918 10 log.go:181] (0x75887e0) Reply frame received for 1 I0111 17:33:03.787135 10 log.go:181] 
(0x75887e0) (0x75890a0) Create stream I0111 17:33:03.787236 10 log.go:181] (0x75887e0) (0x75890a0) Stream added, broadcasting: 3 I0111 17:33:03.788815 10 log.go:181] (0x75887e0) Reply frame received for 3 I0111 17:33:03.789039 10 log.go:181] (0x75887e0) (0x7589500) Create stream I0111 17:33:03.789122 10 log.go:181] (0x75887e0) (0x7589500) Stream added, broadcasting: 5 I0111 17:33:03.790623 10 log.go:181] (0x75887e0) Reply frame received for 5 I0111 17:33:03.851851 10 log.go:181] (0x75887e0) Data frame received for 3 I0111 17:33:03.852069 10 log.go:181] (0x75887e0) Data frame received for 5 I0111 17:33:03.852265 10 log.go:181] (0x7589500) (5) Data frame handling I0111 17:33:03.852382 10 log.go:181] (0x75890a0) (3) Data frame handling I0111 17:33:03.852567 10 log.go:181] (0x75890a0) (3) Data frame sent I0111 17:33:03.852749 10 log.go:181] (0x75887e0) Data frame received for 3 I0111 17:33:03.853099 10 log.go:181] (0x75890a0) (3) Data frame handling I0111 17:33:03.853475 10 log.go:181] (0x75887e0) Data frame received for 1 I0111 17:33:03.853652 10 log.go:181] (0x75889a0) (1) Data frame handling I0111 17:33:03.853855 10 log.go:181] (0x75889a0) (1) Data frame sent I0111 17:33:03.854012 10 log.go:181] (0x75887e0) (0x75889a0) Stream removed, broadcasting: 1 I0111 17:33:03.854176 10 log.go:181] (0x75887e0) Go away received I0111 17:33:03.854487 10 log.go:181] (0x75887e0) (0x75889a0) Stream removed, broadcasting: 1 I0111 17:33:03.854644 10 log.go:181] (0x75887e0) (0x75890a0) Stream removed, broadcasting: 3 I0111 17:33:03.854831 10 log.go:181] (0x75887e0) (0x7589500) Stream removed, broadcasting: 5 Jan 11 17:33:03.854: INFO: Found all 1 expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 17:33:03.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-8408" for this suite. 
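For reference, the node-pod connectivity check above reduces to exec'ing curl inside host-test-container-pod and expecting each netserver pod's /hostName endpoint on port 8080 to answer; the framework additionally pipes the output through grep -v '^\s*$' to drop blank lines. Run by hand it would look roughly like this (the namespace from this run is destroyed after the suite, and <netserver-pod-ip> is a placeholder for the pod IP being probed):

kubectl -n pod-network-test-8408 exec host-test-container-pod -c agnhost-container -- \
  curl -g -q -s --max-time 15 --connect-timeout 1 http://<netserver-pod-ip>:8080/hostName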
• [SLOW TEST:28.827 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:27 Granular Checks: Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:30 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":182,"skipped":3227,"failed":0} SS ------------------------------ [sig-auth] ServiceAccounts should mount projected service account token [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 17:33:03.875: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount projected service account token [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test service account token: Jan 11 17:33:03.967: INFO: Waiting up to 5m0s for pod "test-pod-b0cf2eba-7f67-4b5e-9412-5e49559a7aaf" in namespace "svcaccounts-4386" to be "Succeeded or Failed" Jan 11 17:33:04.007: INFO: Pod "test-pod-b0cf2eba-7f67-4b5e-9412-5e49559a7aaf": Phase="Pending", Reason="", readiness=false. Elapsed: 40.070269ms Jan 11 17:33:06.015: INFO: Pod "test-pod-b0cf2eba-7f67-4b5e-9412-5e49559a7aaf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04788176s Jan 11 17:33:08.037: INFO: Pod "test-pod-b0cf2eba-7f67-4b5e-9412-5e49559a7aaf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.070076615s STEP: Saw pod success Jan 11 17:33:08.037: INFO: Pod "test-pod-b0cf2eba-7f67-4b5e-9412-5e49559a7aaf" satisfied condition "Succeeded or Failed" Jan 11 17:33:08.042: INFO: Trying to get logs from node leguer-worker pod test-pod-b0cf2eba-7f67-4b5e-9412-5e49559a7aaf container agnhost-container: STEP: delete the pod Jan 11 17:33:08.295: INFO: Waiting for pod test-pod-b0cf2eba-7f67-4b5e-9412-5e49559a7aaf to disappear Jan 11 17:33:08.301: INFO: Pod test-pod-b0cf2eba-7f67-4b5e-9412-5e49559a7aaf no longer exists [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 17:33:08.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-4386" for this suite. 
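The ServiceAccounts case above verifies that a token requested through a projected volume is mounted into the pod and readable there. As a rough illustration of the same mechanism (a generic manifest, not the one the e2e framework generates; the pod name, image, paths and audience below are made up):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-token-demo
spec:
  containers:
  - name: main
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: token
      mountPath: /var/run/secrets/tokens
  volumes:
  - name: token
    projected:
      sources:
      - serviceAccountToken:
          path: sa-token
          expirationSeconds: 3600
          audience: demo-audience
EOF

# Once the pod is Running, the projected token is just a file in the container:
kubectl exec projected-token-demo -- cat /var/run/secrets/tokens/sa-token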
•{"msg":"PASSED [sig-auth] ServiceAccounts should mount projected service account token [Conformance]","total":309,"completed":183,"skipped":3229,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 17:33:08.318: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 11 17:33:08.434: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Jan 11 17:33:31.065: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3583 --namespace=crd-publish-openapi-3583 create -f -' Jan 11 17:33:37.592: INFO: stderr: "" Jan 11 17:33:37.592: INFO: stdout: "e2e-test-crd-publish-openapi-990-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Jan 11 17:33:37.593: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3583 --namespace=crd-publish-openapi-3583 delete e2e-test-crd-publish-openapi-990-crds test-cr' Jan 11 17:33:38.917: INFO: stderr: "" Jan 11 17:33:38.917: INFO: stdout: "e2e-test-crd-publish-openapi-990-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Jan 11 17:33:38.917: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3583 --namespace=crd-publish-openapi-3583 apply -f -' Jan 11 17:33:41.261: INFO: stderr: "" Jan 11 17:33:41.261: INFO: stdout: "e2e-test-crd-publish-openapi-990-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Jan 11 17:33:41.262: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3583 --namespace=crd-publish-openapi-3583 delete e2e-test-crd-publish-openapi-990-crds test-cr' Jan 11 17:33:42.473: INFO: stderr: "" Jan 11 17:33:42.473: INFO: stdout: "e2e-test-crd-publish-openapi-990-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Jan 11 17:33:42.474: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3583 explain e2e-test-crd-publish-openapi-990-crds' Jan 11 17:33:45.096: INFO: stderr: "" Jan 11 17:33:45.097: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-990-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this 
representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<>\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 17:34:07.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3583" for this suite. • [SLOW TEST:59.345 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":309,"completed":184,"skipped":3254,"failed":0} SSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 17:34:07.664: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should provide container's cpu request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward API volume plugin Jan 11 17:34:07.800: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c8369b3b-bd70-4836-acfb-4d37249da761" in namespace "downward-api-3835" to be "Succeeded or Failed" Jan 11 17:34:07.828: INFO: Pod "downwardapi-volume-c8369b3b-bd70-4836-acfb-4d37249da761": Phase="Pending", Reason="", readiness=false. Elapsed: 28.565158ms Jan 11 17:34:09.837: INFO: Pod "downwardapi-volume-c8369b3b-bd70-4836-acfb-4d37249da761": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037669468s Jan 11 17:34:11.846: INFO: Pod "downwardapi-volume-c8369b3b-bd70-4836-acfb-4d37249da761": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.04665273s STEP: Saw pod success Jan 11 17:34:11.847: INFO: Pod "downwardapi-volume-c8369b3b-bd70-4836-acfb-4d37249da761" satisfied condition "Succeeded or Failed" Jan 11 17:34:11.853: INFO: Trying to get logs from node leguer-worker pod downwardapi-volume-c8369b3b-bd70-4836-acfb-4d37249da761 container client-container: STEP: delete the pod Jan 11 17:34:11.890: INFO: Waiting for pod downwardapi-volume-c8369b3b-bd70-4836-acfb-4d37249da761 to disappear Jan 11 17:34:11.916: INFO: Pod downwardapi-volume-c8369b3b-bd70-4836-acfb-4d37249da761 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 17:34:11.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3835" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":309,"completed":185,"skipped":3259,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 17:34:11.934: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-539 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating stateful set ss in namespace statefulset-539 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-539 Jan 11 17:34:12.125: INFO: Found 0 stateful pods, waiting for 1 Jan 11 17:34:22.134: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Jan 11 17:34:22.141: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-539 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 11 17:34:23.645: INFO: stderr: "I0111 17:34:23.491411 3234 log.go:181] (0x29f4070) (0x29f40e0) Create stream\nI0111 17:34:23.493545 3234 log.go:181] (0x29f4070) (0x29f40e0) Stream added, broadcasting: 1\nI0111 17:34:23.511486 3234 log.go:181] (0x29f4070) Reply frame received for 1\nI0111 17:34:23.511917 3234 log.go:181] (0x29f4070) (0x28561c0) Create stream\nI0111 17:34:23.511982 3234 log.go:181] (0x29f4070) (0x28561c0) Stream added, broadcasting: 3\nI0111 17:34:23.513227 3234 log.go:181] (0x29f4070) Reply frame received for 
3\nI0111 17:34:23.513431 3234 log.go:181] (0x29f4070) (0x27444d0) Create stream\nI0111 17:34:23.513493 3234 log.go:181] (0x29f4070) (0x27444d0) Stream added, broadcasting: 5\nI0111 17:34:23.514478 3234 log.go:181] (0x29f4070) Reply frame received for 5\nI0111 17:34:23.596375 3234 log.go:181] (0x29f4070) Data frame received for 5\nI0111 17:34:23.596594 3234 log.go:181] (0x27444d0) (5) Data frame handling\nI0111 17:34:23.597107 3234 log.go:181] (0x27444d0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0111 17:34:23.625724 3234 log.go:181] (0x29f4070) Data frame received for 3\nI0111 17:34:23.625863 3234 log.go:181] (0x28561c0) (3) Data frame handling\nI0111 17:34:23.625989 3234 log.go:181] (0x28561c0) (3) Data frame sent\nI0111 17:34:23.626193 3234 log.go:181] (0x29f4070) Data frame received for 5\nI0111 17:34:23.626340 3234 log.go:181] (0x27444d0) (5) Data frame handling\nI0111 17:34:23.626692 3234 log.go:181] (0x29f4070) Data frame received for 3\nI0111 17:34:23.626884 3234 log.go:181] (0x28561c0) (3) Data frame handling\nI0111 17:34:23.628789 3234 log.go:181] (0x29f4070) Data frame received for 1\nI0111 17:34:23.628958 3234 log.go:181] (0x29f40e0) (1) Data frame handling\nI0111 17:34:23.629096 3234 log.go:181] (0x29f40e0) (1) Data frame sent\nI0111 17:34:23.629650 3234 log.go:181] (0x29f4070) (0x29f40e0) Stream removed, broadcasting: 1\nI0111 17:34:23.633586 3234 log.go:181] (0x29f4070) Go away received\nI0111 17:34:23.635841 3234 log.go:181] (0x29f4070) (0x29f40e0) Stream removed, broadcasting: 1\nI0111 17:34:23.636141 3234 log.go:181] (0x29f4070) (0x28561c0) Stream removed, broadcasting: 3\nI0111 17:34:23.636351 3234 log.go:181] (0x29f4070) (0x27444d0) Stream removed, broadcasting: 5\n" Jan 11 17:34:23.645: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 11 17:34:23.645: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 11 17:34:23.653: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Jan 11 17:34:33.662: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jan 11 17:34:33.663: INFO: Waiting for statefulset status.replicas updated to 0 Jan 11 17:34:33.691: INFO: POD NODE PHASE GRACE CONDITIONS Jan 11 17:34:33.692: INFO: ss-0 leguer-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:24 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:24 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:12 +0000 UTC }] Jan 11 17:34:33.693: INFO: Jan 11 17:34:33.693: INFO: StatefulSet ss has not reached scale 3, at 1 Jan 11 17:34:34.703: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.988173858s Jan 11 17:34:35.714: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.977812139s Jan 11 17:34:36.796: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.966791538s Jan 11 17:34:37.805: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.885477349s Jan 11 17:34:38.817: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.875683956s Jan 11 17:34:39.850: INFO: Verifying statefulset ss doesn't scale past 3 for another 
3.864671553s Jan 11 17:34:40.861: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.8310529s Jan 11 17:34:41.873: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.819610921s Jan 11 17:34:42.883: INFO: Verifying statefulset ss doesn't scale past 3 for another 808.600489ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-539 Jan 11 17:34:43.894: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-539 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 11 17:34:45.395: INFO: stderr: "I0111 17:34:45.283979 3254 log.go:181] (0x2974000) (0x2974070) Create stream\nI0111 17:34:45.285846 3254 log.go:181] (0x2974000) (0x2974070) Stream added, broadcasting: 1\nI0111 17:34:45.292534 3254 log.go:181] (0x2974000) Reply frame received for 1\nI0111 17:34:45.293024 3254 log.go:181] (0x2974000) (0x2f0a070) Create stream\nI0111 17:34:45.293109 3254 log.go:181] (0x2974000) (0x2f0a070) Stream added, broadcasting: 3\nI0111 17:34:45.294335 3254 log.go:181] (0x2974000) Reply frame received for 3\nI0111 17:34:45.294708 3254 log.go:181] (0x2974000) (0x29fc070) Create stream\nI0111 17:34:45.294810 3254 log.go:181] (0x2974000) (0x29fc070) Stream added, broadcasting: 5\nI0111 17:34:45.296183 3254 log.go:181] (0x2974000) Reply frame received for 5\nI0111 17:34:45.375895 3254 log.go:181] (0x2974000) Data frame received for 3\nI0111 17:34:45.376279 3254 log.go:181] (0x2974000) Data frame received for 5\nI0111 17:34:45.376450 3254 log.go:181] (0x29fc070) (5) Data frame handling\nI0111 17:34:45.376569 3254 log.go:181] (0x2974000) Data frame received for 1\nI0111 17:34:45.376732 3254 log.go:181] (0x2974070) (1) Data frame handling\nI0111 17:34:45.376907 3254 log.go:181] (0x2f0a070) (3) Data frame handling\nI0111 17:34:45.377611 3254 log.go:181] (0x29fc070) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0111 17:34:45.378290 3254 log.go:181] (0x2974000) Data frame received for 5\nI0111 17:34:45.378415 3254 log.go:181] (0x29fc070) (5) Data frame handling\nI0111 17:34:45.378680 3254 log.go:181] (0x2f0a070) (3) Data frame sent\nI0111 17:34:45.378845 3254 log.go:181] (0x2974000) Data frame received for 3\nI0111 17:34:45.378969 3254 log.go:181] (0x2f0a070) (3) Data frame handling\nI0111 17:34:45.379854 3254 log.go:181] (0x2974070) (1) Data frame sent\nI0111 17:34:45.382196 3254 log.go:181] (0x2974000) (0x2974070) Stream removed, broadcasting: 1\nI0111 17:34:45.383537 3254 log.go:181] (0x2974000) Go away received\nI0111 17:34:45.386309 3254 log.go:181] (0x2974000) (0x2974070) Stream removed, broadcasting: 1\nI0111 17:34:45.386543 3254 log.go:181] (0x2974000) (0x2f0a070) Stream removed, broadcasting: 3\nI0111 17:34:45.386721 3254 log.go:181] (0x2974000) (0x29fc070) Stream removed, broadcasting: 5\n" Jan 11 17:34:45.396: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 11 17:34:45.396: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 11 17:34:45.397: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-539 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 11 17:34:46.863: INFO: stderr: "I0111 17:34:46.701470 3274 log.go:181] (0x2e9d6c0) (0x2e9d730) 
Create stream\nI0111 17:34:46.704077 3274 log.go:181] (0x2e9d6c0) (0x2e9d730) Stream added, broadcasting: 1\nI0111 17:34:46.715471 3274 log.go:181] (0x2e9d6c0) Reply frame received for 1\nI0111 17:34:46.716193 3274 log.go:181] (0x2e9d6c0) (0x2588e70) Create stream\nI0111 17:34:46.716281 3274 log.go:181] (0x2e9d6c0) (0x2588e70) Stream added, broadcasting: 3\nI0111 17:34:46.718079 3274 log.go:181] (0x2e9d6c0) Reply frame received for 3\nI0111 17:34:46.718413 3274 log.go:181] (0x2e9d6c0) (0x2752150) Create stream\nI0111 17:34:46.718597 3274 log.go:181] (0x2e9d6c0) (0x2752150) Stream added, broadcasting: 5\nI0111 17:34:46.720256 3274 log.go:181] (0x2e9d6c0) Reply frame received for 5\nI0111 17:34:46.814928 3274 log.go:181] (0x2e9d6c0) Data frame received for 5\nI0111 17:34:46.817455 3274 log.go:181] (0x2752150) (5) Data frame handling\nI0111 17:34:46.819424 3274 log.go:181] (0x2752150) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0111 17:34:46.828740 3274 log.go:181] (0x2e9d6c0) Data frame received for 5\nI0111 17:34:46.829521 3274 log.go:181] (0x2752150) (5) Data frame handling\nI0111 17:34:46.830346 3274 log.go:181] (0x2e9d6c0) Data frame received for 3\nI0111 17:34:46.830555 3274 log.go:181] (0x2588e70) (3) Data frame handling\nI0111 17:34:46.830769 3274 log.go:181] (0x2588e70) (3) Data frame sent\nI0111 17:34:46.832467 3274 log.go:181] (0x2e9d6c0) Data frame received for 3\nI0111 17:34:46.832605 3274 log.go:181] (0x2588e70) (3) Data frame handling\nI0111 17:34:46.833682 3274 log.go:181] (0x2e9d6c0) Data frame received for 1\nI0111 17:34:46.836126 3274 log.go:181] (0x2e9d730) (1) Data frame handling\nI0111 17:34:46.836267 3274 log.go:181] (0x2e9d730) (1) Data frame sent\nI0111 17:34:46.849062 3274 log.go:181] (0x2e9d6c0) (0x2e9d730) Stream removed, broadcasting: 1\nI0111 17:34:46.850973 3274 log.go:181] (0x2e9d6c0) Go away received\nI0111 17:34:46.854371 3274 log.go:181] (0x2e9d6c0) (0x2e9d730) Stream removed, broadcasting: 1\nI0111 17:34:46.854602 3274 log.go:181] (0x2e9d6c0) (0x2588e70) Stream removed, broadcasting: 3\nI0111 17:34:46.854791 3274 log.go:181] (0x2e9d6c0) (0x2752150) Stream removed, broadcasting: 5\n" Jan 11 17:34:46.864: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 11 17:34:46.864: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 11 17:34:46.864: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-539 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 11 17:34:48.298: INFO: stderr: "I0111 17:34:48.175734 3295 log.go:181] (0x247caf0) (0x247ce00) Create stream\nI0111 17:34:48.181995 3295 log.go:181] (0x247caf0) (0x247ce00) Stream added, broadcasting: 1\nI0111 17:34:48.194987 3295 log.go:181] (0x247caf0) Reply frame received for 1\nI0111 17:34:48.195606 3295 log.go:181] (0x247caf0) (0x26a4000) Create stream\nI0111 17:34:48.195680 3295 log.go:181] (0x247caf0) (0x26a4000) Stream added, broadcasting: 3\nI0111 17:34:48.197191 3295 log.go:181] (0x247caf0) Reply frame received for 3\nI0111 17:34:48.197466 3295 log.go:181] (0x247caf0) (0x2d2b7a0) Create stream\nI0111 17:34:48.197533 3295 log.go:181] (0x247caf0) (0x2d2b7a0) Stream added, broadcasting: 5\nI0111 17:34:48.198856 3295 log.go:181] (0x247caf0) Reply frame received for 5\nI0111 
17:34:48.276074 3295 log.go:181] (0x247caf0) Data frame received for 3\nI0111 17:34:48.276671 3295 log.go:181] (0x26a4000) (3) Data frame handling\nI0111 17:34:48.277368 3295 log.go:181] (0x26a4000) (3) Data frame sent\nI0111 17:34:48.282463 3295 log.go:181] (0x247caf0) Data frame received for 3\nI0111 17:34:48.282586 3295 log.go:181] (0x26a4000) (3) Data frame handling\nI0111 17:34:48.282691 3295 log.go:181] (0x247caf0) Data frame received for 5\nI0111 17:34:48.282781 3295 log.go:181] (0x2d2b7a0) (5) Data frame handling\nI0111 17:34:48.282881 3295 log.go:181] (0x2d2b7a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0111 17:34:48.286730 3295 log.go:181] (0x247caf0) Data frame received for 5\nI0111 17:34:48.286918 3295 log.go:181] (0x2d2b7a0) (5) Data frame handling\nI0111 17:34:48.287047 3295 log.go:181] (0x247caf0) Data frame received for 1\nI0111 17:34:48.287152 3295 log.go:181] (0x247ce00) (1) Data frame handling\nI0111 17:34:48.287239 3295 log.go:181] (0x247ce00) (1) Data frame sent\nI0111 17:34:48.287807 3295 log.go:181] (0x247caf0) (0x247ce00) Stream removed, broadcasting: 1\nI0111 17:34:48.289105 3295 log.go:181] (0x247caf0) Go away received\nI0111 17:34:48.290873 3295 log.go:181] (0x247caf0) (0x247ce00) Stream removed, broadcasting: 1\nI0111 17:34:48.291047 3295 log.go:181] (0x247caf0) (0x26a4000) Stream removed, broadcasting: 3\nI0111 17:34:48.291280 3295 log.go:181] (0x247caf0) (0x2d2b7a0) Stream removed, broadcasting: 5\n" Jan 11 17:34:48.299: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 11 17:34:48.299: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 11 17:34:48.307: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jan 11 17:34:48.308: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jan 11 17:34:48.308: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Jan 11 17:34:48.313: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-539 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 11 17:34:49.787: INFO: stderr: "I0111 17:34:49.677374 3315 log.go:181] (0x2586d20) (0x257e000) Create stream\nI0111 17:34:49.679197 3315 log.go:181] (0x2586d20) (0x257e000) Stream added, broadcasting: 1\nI0111 17:34:49.694927 3315 log.go:181] (0x2586d20) Reply frame received for 1\nI0111 17:34:49.695438 3315 log.go:181] (0x2586d20) (0x28051f0) Create stream\nI0111 17:34:49.695504 3315 log.go:181] (0x2586d20) (0x28051f0) Stream added, broadcasting: 3\nI0111 17:34:49.697052 3315 log.go:181] (0x2586d20) Reply frame received for 3\nI0111 17:34:49.697352 3315 log.go:181] (0x2586d20) (0x257e0e0) Create stream\nI0111 17:34:49.697424 3315 log.go:181] (0x2586d20) (0x257e0e0) Stream added, broadcasting: 5\nI0111 17:34:49.698812 3315 log.go:181] (0x2586d20) Reply frame received for 5\nI0111 17:34:49.770072 3315 log.go:181] (0x2586d20) Data frame received for 3\nI0111 17:34:49.770378 3315 log.go:181] (0x28051f0) (3) Data frame handling\nI0111 17:34:49.770931 3315 log.go:181] (0x2586d20) Data frame received for 5\nI0111 17:34:49.771067 3315 log.go:181] (0x257e0e0) (5) Data frame handling\nI0111 
17:34:49.771252 3315 log.go:181] (0x257e0e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0111 17:34:49.771479 3315 log.go:181] (0x28051f0) (3) Data frame sent\nI0111 17:34:49.771726 3315 log.go:181] (0x2586d20) Data frame received for 1\nI0111 17:34:49.771836 3315 log.go:181] (0x2586d20) Data frame received for 5\nI0111 17:34:49.771971 3315 log.go:181] (0x257e0e0) (5) Data frame handling\nI0111 17:34:49.772039 3315 log.go:181] (0x257e000) (1) Data frame handling\nI0111 17:34:49.772153 3315 log.go:181] (0x257e000) (1) Data frame sent\nI0111 17:34:49.772374 3315 log.go:181] (0x2586d20) Data frame received for 3\nI0111 17:34:49.772446 3315 log.go:181] (0x28051f0) (3) Data frame handling\nI0111 17:34:49.773790 3315 log.go:181] (0x2586d20) (0x257e000) Stream removed, broadcasting: 1\nI0111 17:34:49.775428 3315 log.go:181] (0x2586d20) Go away received\nI0111 17:34:49.777724 3315 log.go:181] (0x2586d20) (0x257e000) Stream removed, broadcasting: 1\nI0111 17:34:49.777986 3315 log.go:181] (0x2586d20) (0x28051f0) Stream removed, broadcasting: 3\nI0111 17:34:49.778222 3315 log.go:181] (0x2586d20) (0x257e0e0) Stream removed, broadcasting: 5\n" Jan 11 17:34:49.787: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 11 17:34:49.788: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 11 17:34:49.788: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-539 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 11 17:34:51.310: INFO: stderr: "I0111 17:34:51.124681 3335 log.go:181] (0x25e7730) (0x25e77a0) Create stream\nI0111 17:34:51.127347 3335 log.go:181] (0x25e7730) (0x25e77a0) Stream added, broadcasting: 1\nI0111 17:34:51.144255 3335 log.go:181] (0x25e7730) Reply frame received for 1\nI0111 17:34:51.144680 3335 log.go:181] (0x25e7730) (0x3028150) Create stream\nI0111 17:34:51.144744 3335 log.go:181] (0x25e7730) (0x3028150) Stream added, broadcasting: 3\nI0111 17:34:51.145872 3335 log.go:181] (0x25e7730) Reply frame received for 3\nI0111 17:34:51.146082 3335 log.go:181] (0x25e7730) (0x25e6070) Create stream\nI0111 17:34:51.146146 3335 log.go:181] (0x25e7730) (0x25e6070) Stream added, broadcasting: 5\nI0111 17:34:51.146980 3335 log.go:181] (0x25e7730) Reply frame received for 5\nI0111 17:34:51.229492 3335 log.go:181] (0x25e7730) Data frame received for 5\nI0111 17:34:51.229823 3335 log.go:181] (0x25e6070) (5) Data frame handling\nI0111 17:34:51.230511 3335 log.go:181] (0x25e6070) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0111 17:34:51.293301 3335 log.go:181] (0x25e7730) Data frame received for 5\nI0111 17:34:51.293574 3335 log.go:181] (0x25e6070) (5) Data frame handling\nI0111 17:34:51.293728 3335 log.go:181] (0x25e7730) Data frame received for 3\nI0111 17:34:51.293883 3335 log.go:181] (0x3028150) (3) Data frame handling\nI0111 17:34:51.294079 3335 log.go:181] (0x3028150) (3) Data frame sent\nI0111 17:34:51.294217 3335 log.go:181] (0x25e7730) Data frame received for 3\nI0111 17:34:51.294360 3335 log.go:181] (0x3028150) (3) Data frame handling\nI0111 17:34:51.294827 3335 log.go:181] (0x25e7730) Data frame received for 1\nI0111 17:34:51.294943 3335 log.go:181] (0x25e77a0) (1) Data frame handling\nI0111 17:34:51.295076 3335 log.go:181] (0x25e77a0) (1) Data frame sent\nI0111 17:34:51.295776 3335 log.go:181] 
(0x25e7730) (0x25e77a0) Stream removed, broadcasting: 1\nI0111 17:34:51.298366 3335 log.go:181] (0x25e7730) Go away received\nI0111 17:34:51.300030 3335 log.go:181] (0x25e7730) (0x25e77a0) Stream removed, broadcasting: 1\nI0111 17:34:51.300387 3335 log.go:181] (0x25e7730) (0x3028150) Stream removed, broadcasting: 3\nI0111 17:34:51.300568 3335 log.go:181] (0x25e7730) (0x25e6070) Stream removed, broadcasting: 5\n" Jan 11 17:34:51.311: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 11 17:34:51.311: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 11 17:34:51.312: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-539 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 11 17:34:52.790: INFO: stderr: "I0111 17:34:52.656529 3355 log.go:181] (0x312e000) (0x312e070) Create stream\nI0111 17:34:52.659550 3355 log.go:181] (0x312e000) (0x312e070) Stream added, broadcasting: 1\nI0111 17:34:52.677318 3355 log.go:181] (0x312e000) Reply frame received for 1\nI0111 17:34:52.677832 3355 log.go:181] (0x312e000) (0x2bb4150) Create stream\nI0111 17:34:52.677904 3355 log.go:181] (0x312e000) (0x2bb4150) Stream added, broadcasting: 3\nI0111 17:34:52.679257 3355 log.go:181] (0x312e000) Reply frame received for 3\nI0111 17:34:52.679471 3355 log.go:181] (0x312e000) (0x29f20e0) Create stream\nI0111 17:34:52.679540 3355 log.go:181] (0x312e000) (0x29f20e0) Stream added, broadcasting: 5\nI0111 17:34:52.680513 3355 log.go:181] (0x312e000) Reply frame received for 5\nI0111 17:34:52.747275 3355 log.go:181] (0x312e000) Data frame received for 5\nI0111 17:34:52.747640 3355 log.go:181] (0x29f20e0) (5) Data frame handling\nI0111 17:34:52.748439 3355 log.go:181] (0x29f20e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0111 17:34:52.773585 3355 log.go:181] (0x312e000) Data frame received for 5\nI0111 17:34:52.773748 3355 log.go:181] (0x29f20e0) (5) Data frame handling\nI0111 17:34:52.773884 3355 log.go:181] (0x312e000) Data frame received for 3\nI0111 17:34:52.774056 3355 log.go:181] (0x2bb4150) (3) Data frame handling\nI0111 17:34:52.774317 3355 log.go:181] (0x2bb4150) (3) Data frame sent\nI0111 17:34:52.774532 3355 log.go:181] (0x312e000) Data frame received for 3\nI0111 17:34:52.774808 3355 log.go:181] (0x2bb4150) (3) Data frame handling\nI0111 17:34:52.774954 3355 log.go:181] (0x312e000) Data frame received for 1\nI0111 17:34:52.775117 3355 log.go:181] (0x312e070) (1) Data frame handling\nI0111 17:34:52.775281 3355 log.go:181] (0x312e070) (1) Data frame sent\nI0111 17:34:52.776658 3355 log.go:181] (0x312e000) (0x312e070) Stream removed, broadcasting: 1\nI0111 17:34:52.779214 3355 log.go:181] (0x312e000) Go away received\nI0111 17:34:52.780770 3355 log.go:181] (0x312e000) (0x312e070) Stream removed, broadcasting: 1\nI0111 17:34:52.781322 3355 log.go:181] (0x312e000) (0x2bb4150) Stream removed, broadcasting: 3\nI0111 17:34:52.781469 3355 log.go:181] (0x312e000) (0x29f20e0) Stream removed, broadcasting: 5\n" Jan 11 17:34:52.791: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 11 17:34:52.791: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 11 17:34:52.791: INFO: Waiting for statefulset status.replicas updated to 0 Jan 11 
17:34:52.797: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Jan 11 17:35:02.814: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jan 11 17:35:02.814: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jan 11 17:35:02.814: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jan 11 17:35:02.885: INFO: POD NODE PHASE GRACE CONDITIONS Jan 11 17:35:02.885: INFO: ss-0 leguer-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:50 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:50 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:12 +0000 UTC }] Jan 11 17:35:02.886: INFO: ss-1 leguer-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:33 +0000 UTC }] Jan 11 17:35:02.886: INFO: ss-2 leguer-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:33 +0000 UTC }] Jan 11 17:35:02.886: INFO: Jan 11 17:35:02.886: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 11 17:35:03.898: INFO: POD NODE PHASE GRACE CONDITIONS Jan 11 17:35:03.898: INFO: ss-0 leguer-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:50 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:50 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:12 +0000 UTC }] Jan 11 17:35:03.899: INFO: ss-1 leguer-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:33 +0000 UTC }] Jan 11 17:35:03.899: INFO: ss-2 leguer-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:52 +0000 UTC ContainersNotReady containers with 
unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:33 +0000 UTC }] Jan 11 17:35:03.900: INFO: Jan 11 17:35:03.900: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 11 17:35:05.131: INFO: POD NODE PHASE GRACE CONDITIONS Jan 11 17:35:05.131: INFO: ss-0 leguer-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:50 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:50 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:12 +0000 UTC }] Jan 11 17:35:05.132: INFO: ss-1 leguer-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:33 +0000 UTC }] Jan 11 17:35:05.133: INFO: ss-2 leguer-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:33 +0000 UTC }] Jan 11 17:35:05.133: INFO: Jan 11 17:35:05.133: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 11 17:35:06.144: INFO: POD NODE PHASE GRACE CONDITIONS Jan 11 17:35:06.144: INFO: ss-0 leguer-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:50 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:50 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:12 +0000 UTC }] Jan 11 17:35:06.145: INFO: ss-1 leguer-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:33 +0000 UTC }] Jan 11 17:35:06.145: INFO: ss-2 leguer-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:33 +0000 UTC }] Jan 11 17:35:06.146: INFO: Jan 11 17:35:06.146: INFO: StatefulSet ss has not reached scale 0, 
at 3 Jan 11 17:35:07.159: INFO: POD NODE PHASE GRACE CONDITIONS Jan 11 17:35:07.159: INFO: ss-0 leguer-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:50 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:50 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:12 +0000 UTC }] Jan 11 17:35:07.159: INFO: ss-1 leguer-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:33 +0000 UTC }] Jan 11 17:35:07.159: INFO: ss-2 leguer-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:33 +0000 UTC }] Jan 11 17:35:07.160: INFO: Jan 11 17:35:07.160: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 11 17:35:08.172: INFO: POD NODE PHASE GRACE CONDITIONS Jan 11 17:35:08.172: INFO: ss-0 leguer-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:50 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:50 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:12 +0000 UTC }] Jan 11 17:35:08.172: INFO: ss-1 leguer-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:33 +0000 UTC }] Jan 11 17:35:08.173: INFO: ss-2 leguer-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:33 +0000 UTC }] Jan 11 17:35:08.173: INFO: Jan 11 17:35:08.173: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 11 17:35:09.185: INFO: POD NODE PHASE GRACE CONDITIONS Jan 11 17:35:09.186: INFO: ss-0 leguer-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:12 +0000 UTC } 
{Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:50 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:50 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:12 +0000 UTC }] Jan 11 17:35:09.186: INFO: ss-1 leguer-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:33 +0000 UTC }] Jan 11 17:35:09.186: INFO: ss-2 leguer-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:33 +0000 UTC }] Jan 11 17:35:09.187: INFO: Jan 11 17:35:09.187: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 11 17:35:10.198: INFO: POD NODE PHASE GRACE CONDITIONS Jan 11 17:35:10.198: INFO: ss-0 leguer-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:50 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:50 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:12 +0000 UTC }] Jan 11 17:35:10.199: INFO: ss-1 leguer-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:33 +0000 UTC }] Jan 11 17:35:10.199: INFO: ss-2 leguer-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:33 +0000 UTC }] Jan 11 17:35:10.200: INFO: Jan 11 17:35:10.200: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 11 17:35:11.211: INFO: POD NODE PHASE GRACE CONDITIONS Jan 11 17:35:11.212: INFO: ss-0 leguer-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:50 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-11 
17:34:50 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:12 +0000 UTC }] Jan 11 17:35:11.212: INFO: ss-1 leguer-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:33 +0000 UTC }] Jan 11 17:35:11.212: INFO: ss-2 leguer-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:33 +0000 UTC }] Jan 11 17:35:11.213: INFO: Jan 11 17:35:11.213: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 11 17:35:12.222: INFO: POD NODE PHASE GRACE CONDITIONS Jan 11 17:35:12.222: INFO: ss-0 leguer-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:50 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:50 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:12 +0000 UTC }] Jan 11 17:35:12.222: INFO: ss-1 leguer-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:33 +0000 UTC }] Jan 11 17:35:12.223: INFO: ss-2 leguer-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-11 17:34:33 +0000 UTC }] Jan 11 17:35:12.223: INFO: Jan 11 17:35:12.223: INFO: StatefulSet ss has not reached scale 0, at 3 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-539 Jan 11 17:35:13.236: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-539 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 11 17:35:14.543: INFO: rc: 1 Jan 11 17:35:14.543: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-539 exec ss-0 -- 
/bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 Jan 11 17:35:24.545: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-539 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 11 17:35:25.899: INFO: rc: 1 Jan 11 17:35:25.900: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-539 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 Jan 11 17:35:35.901: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-539 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 11 17:35:37.201: INFO: rc: 1 Jan 11 17:35:37.201: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-539 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 11 17:35:47.203: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-539 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 11 17:35:48.364: INFO: rc: 1 Jan 11 17:35:48.364: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-539 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 11 17:35:58.365: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-539 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 11 17:35:59.666: INFO: rc: 1 Jan 11 17:35:59.667: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-539 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 11 17:36:09.668: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-539 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 11 17:36:10.870: INFO: rc: 1 Jan 11 17:36:10.871: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-539 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 11 17:36:20.871: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-539 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 11 17:36:22.022: INFO: rc: 1 Jan 11 17:36:22.022: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-539 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 11 17:36:32.023: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-539 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 11 17:36:33.201: INFO: rc: 1 Jan 11 17:36:33.201: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-539 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 11 17:36:43.202: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-539 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 11 17:36:44.370: INFO: rc: 1 Jan 11 17:36:44.370: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-539 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 11 17:36:54.371: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-539 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 11 17:36:55.511: INFO: rc: 1 Jan 11 17:36:55.511: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-539 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 11 17:37:05.512: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-539 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 11 17:37:06.695: INFO: rc: 1 Jan 11 17:37:06.696: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-539 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 11 17:37:16.697: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-539 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 11 17:37:18.050: INFO: rc: 1 Jan 11 17:37:18.050: INFO: Waiting 10s to retry failed 
RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-539 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 11 17:37:28.051: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-539 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 11 17:37:29.302: INFO: rc: 1 Jan 11 17:37:29.303: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-539 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 11 17:37:39.304: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-539 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 11 17:37:40.468: INFO: rc: 1 Jan 11 17:37:40.468: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-539 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 11 17:37:50.469: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-539 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 11 17:37:51.816: INFO: rc: 1 Jan 11 17:37:51.816: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-539 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 11 17:38:01.817: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-539 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 11 17:38:03.031: INFO: rc: 1 Jan 11 17:38:03.031: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-539 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 11 17:38:13.032: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-539 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 11 17:38:14.214: INFO: rc: 1 Jan 11 17:38:14.214: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-539 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server 
(NotFound): pods "ss-0" not found error: exit status 1 Jan 11 17:38:24.215: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-539 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 11 17:38:25.422: INFO: rc: 1 Jan 11 17:38:25.422: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-539 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 11 17:38:35.423: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-539 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 11 17:38:36.657: INFO: rc: 1 Jan 11 17:38:36.657: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-539 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 11 17:38:46.658: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-539 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 11 17:38:47.911: INFO: rc: 1 Jan 11 17:38:47.912: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-539 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 11 17:38:57.913: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-539 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 11 17:38:59.075: INFO: rc: 1 Jan 11 17:38:59.076: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-539 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 11 17:39:09.077: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-539 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 11 17:39:10.254: INFO: rc: 1 Jan 11 17:39:10.255: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-539 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 11 17:39:20.256: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-539 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html 
/usr/local/apache2/htdocs/ || true' Jan 11 17:39:21.456: INFO: rc: 1 Jan 11 17:39:21.457: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-539 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 11 17:39:31.458: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-539 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 11 17:39:32.612: INFO: rc: 1 Jan 11 17:39:32.612: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-539 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 11 17:39:42.614: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-539 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 11 17:39:43.787: INFO: rc: 1 Jan 11 17:39:43.787: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-539 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 11 17:39:53.788: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-539 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 11 17:39:55.050: INFO: rc: 1 Jan 11 17:39:55.051: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-539 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 11 17:40:05.052: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-539 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 11 17:40:06.283: INFO: rc: 1 Jan 11 17:40:06.283: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-539 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 11 17:40:16.285: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-539 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 11 17:40:17.433: INFO: rc: 1 Jan 11 17:40:17.434: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: Jan 11 17:40:17.434: INFO: Scaling statefulset ss to 0 Jan 11 17:40:17.451: INFO: Waiting for statefulset 
status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Jan 11 17:40:17.456: INFO: Deleting all statefulset in ns statefulset-539 Jan 11 17:40:17.460: INFO: Scaling statefulset ss to 0 Jan 11 17:40:17.476: INFO: Waiting for statefulset status.replicas updated to 0 Jan 11 17:40:17.480: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 17:40:17.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-539" for this suite. • [SLOW TEST:365.581 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":309,"completed":186,"skipped":3295,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 17:40:17.518: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating service in namespace services-1366 STEP: creating service affinity-clusterip in namespace services-1366 STEP: creating replication controller affinity-clusterip in namespace services-1366 I0111 17:40:17.666421 10 runners.go:190] Created replication controller with name: affinity-clusterip, namespace: services-1366, replica count: 3 I0111 17:40:20.718032 10 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0111 17:40:23.719352 10 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 11 17:40:23.731: INFO: Creating new exec pod Jan 11 17:40:28.756: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-1366 exec execpod-affinityg5vt5 -- /bin/sh -x -c nc -zv -t -w 2 
affinity-clusterip 80' Jan 11 17:40:30.218: INFO: stderr: "I0111 17:40:30.093811 3939 log.go:181] (0x3002000) (0x3002070) Create stream\nI0111 17:40:30.097201 3939 log.go:181] (0x3002000) (0x3002070) Stream added, broadcasting: 1\nI0111 17:40:30.106518 3939 log.go:181] (0x3002000) Reply frame received for 1\nI0111 17:40:30.106985 3939 log.go:181] (0x3002000) (0x274a230) Create stream\nI0111 17:40:30.107054 3939 log.go:181] (0x3002000) (0x274a230) Stream added, broadcasting: 3\nI0111 17:40:30.109045 3939 log.go:181] (0x3002000) Reply frame received for 3\nI0111 17:40:30.109539 3939 log.go:181] (0x3002000) (0x28ada40) Create stream\nI0111 17:40:30.109656 3939 log.go:181] (0x3002000) (0x28ada40) Stream added, broadcasting: 5\nI0111 17:40:30.111657 3939 log.go:181] (0x3002000) Reply frame received for 5\nI0111 17:40:30.199823 3939 log.go:181] (0x3002000) Data frame received for 5\nI0111 17:40:30.200253 3939 log.go:181] (0x3002000) Data frame received for 3\nI0111 17:40:30.200413 3939 log.go:181] (0x274a230) (3) Data frame handling\nI0111 17:40:30.200561 3939 log.go:181] (0x3002000) Data frame received for 1\nI0111 17:40:30.200684 3939 log.go:181] (0x3002070) (1) Data frame handling\nI0111 17:40:30.200823 3939 log.go:181] (0x28ada40) (5) Data frame handling\nI0111 17:40:30.202312 3939 log.go:181] (0x3002070) (1) Data frame sent\n+ nc -zv -t -w 2 affinity-clusterip 80\nI0111 17:40:30.202675 3939 log.go:181] (0x28ada40) (5) Data frame sent\nI0111 17:40:30.204187 3939 log.go:181] (0x3002000) Data frame received for 5\nI0111 17:40:30.204327 3939 log.go:181] (0x28ada40) (5) Data frame handling\nI0111 17:40:30.204456 3939 log.go:181] (0x28ada40) (5) Data frame sent\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\nI0111 17:40:30.204548 3939 log.go:181] (0x3002000) Data frame received for 5\nI0111 17:40:30.204620 3939 log.go:181] (0x28ada40) (5) Data frame handling\nI0111 17:40:30.205755 3939 log.go:181] (0x3002000) (0x3002070) Stream removed, broadcasting: 1\nI0111 17:40:30.206167 3939 log.go:181] (0x3002000) Go away received\nI0111 17:40:30.209473 3939 log.go:181] (0x3002000) (0x3002070) Stream removed, broadcasting: 1\nI0111 17:40:30.209676 3939 log.go:181] (0x3002000) (0x274a230) Stream removed, broadcasting: 3\nI0111 17:40:30.209856 3939 log.go:181] (0x3002000) (0x28ada40) Stream removed, broadcasting: 5\n" Jan 11 17:40:30.219: INFO: stdout: "" Jan 11 17:40:30.225: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-1366 exec execpod-affinityg5vt5 -- /bin/sh -x -c nc -zv -t -w 2 10.96.132.101 80' Jan 11 17:40:31.698: INFO: stderr: "I0111 17:40:31.571906 3959 log.go:181] (0x2696000) (0x2696150) Create stream\nI0111 17:40:31.574860 3959 log.go:181] (0x2696000) (0x2696150) Stream added, broadcasting: 1\nI0111 17:40:31.593494 3959 log.go:181] (0x2696000) Reply frame received for 1\nI0111 17:40:31.594025 3959 log.go:181] (0x2696000) (0x28600e0) Create stream\nI0111 17:40:31.594101 3959 log.go:181] (0x2696000) (0x28600e0) Stream added, broadcasting: 3\nI0111 17:40:31.595282 3959 log.go:181] (0x2696000) Reply frame received for 3\nI0111 17:40:31.595506 3959 log.go:181] (0x2696000) (0x2f10150) Create stream\nI0111 17:40:31.595574 3959 log.go:181] (0x2696000) (0x2f10150) Stream added, broadcasting: 5\nI0111 17:40:31.597153 3959 log.go:181] (0x2696000) Reply frame received for 5\nI0111 17:40:31.681295 3959 log.go:181] (0x2696000) Data frame received for 5\nI0111 17:40:31.681578 3959 log.go:181] (0x2696000) Data frame 
received for 3\nI0111 17:40:31.681787 3959 log.go:181] (0x28600e0) (3) Data frame handling\nI0111 17:40:31.682051 3959 log.go:181] (0x2f10150) (5) Data frame handling\nI0111 17:40:31.682468 3959 log.go:181] (0x2696000) Data frame received for 1\nI0111 17:40:31.682567 3959 log.go:181] (0x2696150) (1) Data frame handling\n+ nc -zv -t -w 2 10.96.132.101 80\nConnection to 10.96.132.101 80 port [tcp/http] succeeded!\nI0111 17:40:31.683799 3959 log.go:181] (0x2696150) (1) Data frame sent\nI0111 17:40:31.683971 3959 log.go:181] (0x2f10150) (5) Data frame sent\nI0111 17:40:31.684145 3959 log.go:181] (0x2696000) Data frame received for 5\nI0111 17:40:31.684258 3959 log.go:181] (0x2f10150) (5) Data frame handling\nI0111 17:40:31.684913 3959 log.go:181] (0x2696000) (0x2696150) Stream removed, broadcasting: 1\nI0111 17:40:31.687271 3959 log.go:181] (0x2696000) Go away received\nI0111 17:40:31.690252 3959 log.go:181] (0x2696000) (0x2696150) Stream removed, broadcasting: 1\nI0111 17:40:31.690450 3959 log.go:181] (0x2696000) (0x28600e0) Stream removed, broadcasting: 3\nI0111 17:40:31.690626 3959 log.go:181] (0x2696000) (0x2f10150) Stream removed, broadcasting: 5\n" Jan 11 17:40:31.699: INFO: stdout: "" Jan 11 17:40:31.700: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-1366 exec execpod-affinityg5vt5 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.96.132.101:80/ ; done' Jan 11 17:40:33.211: INFO: stderr: "I0111 17:40:33.000817 3979 log.go:181] (0x2b2e000) (0x2b2e070) Create stream\nI0111 17:40:33.002633 3979 log.go:181] (0x2b2e000) (0x2b2e070) Stream added, broadcasting: 1\nI0111 17:40:33.011396 3979 log.go:181] (0x2b2e000) Reply frame received for 1\nI0111 17:40:33.012288 3979 log.go:181] (0x2b2e000) (0x247ca10) Create stream\nI0111 17:40:33.012445 3979 log.go:181] (0x2b2e000) (0x247ca10) Stream added, broadcasting: 3\nI0111 17:40:33.014485 3979 log.go:181] (0x2b2e000) Reply frame received for 3\nI0111 17:40:33.014848 3979 log.go:181] (0x2b2e000) (0x247db90) Create stream\nI0111 17:40:33.014934 3979 log.go:181] (0x2b2e000) (0x247db90) Stream added, broadcasting: 5\nI0111 17:40:33.016205 3979 log.go:181] (0x2b2e000) Reply frame received for 5\nI0111 17:40:33.099841 3979 log.go:181] (0x2b2e000) Data frame received for 3\nI0111 17:40:33.100121 3979 log.go:181] (0x247ca10) (3) Data frame handling\nI0111 17:40:33.100364 3979 log.go:181] (0x2b2e000) Data frame received for 5\nI0111 17:40:33.100546 3979 log.go:181] (0x247ca10) (3) Data frame sent\nI0111 17:40:33.100739 3979 log.go:181] (0x247db90) (5) Data frame handling\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.132.101:80/\nI0111 17:40:33.101049 3979 log.go:181] (0x247db90) (5) Data frame sent\nI0111 17:40:33.104937 3979 log.go:181] (0x2b2e000) Data frame received for 3\nI0111 17:40:33.105032 3979 log.go:181] (0x247ca10) (3) Data frame handling\nI0111 17:40:33.105126 3979 log.go:181] (0x247ca10) (3) Data frame sent\nI0111 17:40:33.105696 3979 log.go:181] (0x2b2e000) Data frame received for 3\nI0111 17:40:33.105813 3979 log.go:181] (0x247ca10) (3) Data frame handling\nI0111 17:40:33.105933 3979 log.go:181] (0x247ca10) (3) Data frame sent\nI0111 17:40:33.106037 3979 log.go:181] (0x2b2e000) Data frame received for 5\nI0111 17:40:33.106133 3979 log.go:181] (0x247db90) (5) Data frame handling\nI0111 17:40:33.106252 3979 log.go:181] (0x247db90) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 
http://10.96.132.101:80/\nI0111 17:40:33.126105 3979 log.go:181] (0x2b2e000) Data frame received for 3\nI0111 17:40:33.126305 3979 log.go:181] (0x247ca10) (3) Data frame handling\nI0111 17:40:33.126431 3979 log.go:181] (0x2b2e000) Data frame received for 5\nI0111 17:40:33.126554 3979 log.go:181] (0x247db90) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.132.101:80/\nI0111 17:40:33.126708 3979 log.go:181] (0x247ca10) (3) Data frame sent\nI0111 17:40:33.126933 3979 log.go:181] (0x2b2e000) Data frame received for 3\nI0111 17:40:33.127051 3979 log.go:181] (0x247ca10) (3) Data frame handling\nI0111 17:40:33.127150 3979 log.go:181] (0x247ca10) (3) Data frame sent\nI0111 17:40:33.127222 3979 log.go:181] (0x2b2e000) Data frame received for 3\nI0111 17:40:33.127357 3979 log.go:181] (0x247db90) (5) Data frame sent\nI0111 17:40:33.127596 3979 log.go:181] (0x2b2e000) Data frame received for 5\nI0111 17:40:33.127758 3979 log.go:181] (0x247db90) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.132.101:80/\nI0111 17:40:33.127894 3979 log.go:181] (0x247ca10) (3) Data frame handling\nI0111 17:40:33.127992 3979 log.go:181] (0x247ca10) (3) Data frame sent\nI0111 17:40:33.128144 3979 log.go:181] (0x2b2e000) Data frame received for 3\nI0111 17:40:33.128219 3979 log.go:181] (0x247ca10) (3) Data frame handling\nI0111 17:40:33.128347 3979 log.go:181] (0x247db90) (5) Data frame sent\nI0111 17:40:33.128519 3979 log.go:181] (0x2b2e000) Data frame received for 5\nI0111 17:40:33.128590 3979 log.go:181] (0x247db90) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.132.101:80/\nI0111 17:40:33.128678 3979 log.go:181] (0x247ca10) (3) Data frame sent\nI0111 17:40:33.128774 3979 log.go:181] (0x2b2e000) Data frame received for 3\nI0111 17:40:33.128909 3979 log.go:181] (0x247ca10) (3) Data frame handling\nI0111 17:40:33.129004 3979 log.go:181] (0x247ca10) (3) Data frame sent\nI0111 17:40:33.129078 3979 log.go:181] (0x2b2e000) Data frame received for 3\nI0111 17:40:33.129164 3979 log.go:181] (0x247ca10) (3) Data frame handling\nI0111 17:40:33.129226 3979 log.go:181] (0x247db90) (5) Data frame sent\nI0111 17:40:33.129314 3979 log.go:181] (0x247ca10) (3) Data frame sent\nI0111 17:40:33.131923 3979 log.go:181] (0x2b2e000) Data frame received for 3\nI0111 17:40:33.132009 3979 log.go:181] (0x247ca10) (3) Data frame handling\nI0111 17:40:33.132143 3979 log.go:181] (0x247ca10) (3) Data frame sent\nI0111 17:40:33.132355 3979 log.go:181] (0x2b2e000) Data frame received for 5\nI0111 17:40:33.132472 3979 log.go:181] (0x247db90) (5) Data frame handling\nI0111 17:40:33.132570 3979 log.go:181] (0x247db90) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.132.101:80/\nI0111 17:40:33.132658 3979 log.go:181] (0x2b2e000) Data frame received for 3\nI0111 17:40:33.132714 3979 log.go:181] (0x247ca10) (3) Data frame handling\nI0111 17:40:33.132785 3979 log.go:181] (0x247ca10) (3) Data frame sent\nI0111 17:40:33.135867 3979 log.go:181] (0x2b2e000) Data frame received for 3\nI0111 17:40:33.135949 3979 log.go:181] (0x247ca10) (3) Data frame handling\nI0111 17:40:33.136017 3979 log.go:181] (0x247ca10) (3) Data frame sent\nI0111 17:40:33.136434 3979 log.go:181] (0x2b2e000) Data frame received for 3\nI0111 17:40:33.136510 3979 log.go:181] (0x247ca10) (3) Data frame handling\nI0111 17:40:33.136577 3979 log.go:181] (0x247ca10) (3) Data frame sent\nI0111 17:40:33.136658 3979 log.go:181] (0x2b2e000) Data frame received for 5\nI0111 
17:40:33.136730 3979 log.go:181] (0x247db90) (5) Data frame handling\nI0111 17:40:33.136799 3979 log.go:181] (0x247db90) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.132.101:80/\nI0111 17:40:33.140380 3979 log.go:181] (0x2b2e000) Data frame received for 3\nI0111 17:40:33.140465 3979 log.go:181] (0x247ca10) (3) Data frame handling\nI0111 17:40:33.140529 3979 log.go:181] (0x247ca10) (3) Data frame sent\nI0111 17:40:33.140585 3979 log.go:181] (0x2b2e000) Data frame received for 3\nI0111 17:40:33.140647 3979 log.go:181] (0x247ca10) (3) Data frame handling\nI0111 17:40:33.140727 3979 log.go:181] (0x2b2e000) Data frame received for 5\nI0111 17:40:33.140919 3979 log.go:181] (0x247db90) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.132.101:80/\nI0111 17:40:33.141048 3979 log.go:181] (0x247ca10) (3) Data frame sent\nI0111 17:40:33.141133 3979 log.go:181] (0x247db90) (5) Data frame sent\nI0111 17:40:33.145623 3979 log.go:181] (0x2b2e000) Data frame received for 3\nI0111 17:40:33.145712 3979 log.go:181] (0x247ca10) (3) Data frame handling\nI0111 17:40:33.145797 3979 log.go:181] (0x247ca10) (3) Data frame sent\nI0111 17:40:33.146005 3979 log.go:181] (0x2b2e000) Data frame received for 3\nI0111 17:40:33.146126 3979 log.go:181] (0x247ca10) (3) Data frame handling\nI0111 17:40:33.146238 3979 log.go:181] (0x2b2e000) Data frame received for 5\nI0111 17:40:33.146356 3979 log.go:181] (0x247db90) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.132.101:80/\nI0111 17:40:33.146468 3979 log.go:181] (0x247ca10) (3) Data frame sent\nI0111 17:40:33.146654 3979 log.go:181] (0x247db90) (5) Data frame sent\nI0111 17:40:33.149817 3979 log.go:181] (0x2b2e000) Data frame received for 3\nI0111 17:40:33.149896 3979 log.go:181] (0x247ca10) (3) Data frame handling\nI0111 17:40:33.149982 3979 log.go:181] (0x247ca10) (3) Data frame sent\nI0111 17:40:33.150622 3979 log.go:181] (0x2b2e000) Data frame received for 5\nI0111 17:40:33.150741 3979 log.go:181] (0x247db90) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.132.101:80/I0111 17:40:33.150828 3979 log.go:181] (0x2b2e000) Data frame received for 3\nI0111 17:40:33.150902 3979 log.go:181] (0x247ca10) (3) Data frame handling\nI0111 17:40:33.150967 3979 log.go:181] (0x247db90) (5) Data frame sent\nI0111 17:40:33.151062 3979 log.go:181] (0x2b2e000) Data frame received for 5\nI0111 17:40:33.151125 3979 log.go:181] (0x247db90) (5) Data frame handling\n\nI0111 17:40:33.151208 3979 log.go:181] (0x247ca10) (3) Data frame sent\nI0111 17:40:33.151273 3979 log.go:181] (0x247db90) (5) Data frame sent\nI0111 17:40:33.155516 3979 log.go:181] (0x2b2e000) Data frame received for 3\nI0111 17:40:33.155604 3979 log.go:181] (0x247ca10) (3) Data frame handling\nI0111 17:40:33.155714 3979 log.go:181] (0x247ca10) (3) Data frame sent\nI0111 17:40:33.155818 3979 log.go:181] (0x2b2e000) Data frame received for 5\nI0111 17:40:33.155936 3979 log.go:181] (0x247db90) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.132.101:80/\nI0111 17:40:33.156006 3979 log.go:181] (0x2b2e000) Data frame received for 3\nI0111 17:40:33.156086 3979 log.go:181] (0x247ca10) (3) Data frame handling\nI0111 17:40:33.156156 3979 log.go:181] (0x247db90) (5) Data frame sent\nI0111 17:40:33.156273 3979 log.go:181] (0x247ca10) (3) Data frame sent\nI0111 17:40:33.159723 3979 log.go:181] (0x2b2e000) Data frame received for 3\nI0111 17:40:33.159860 3979 log.go:181] (0x247ca10) (3) Data 
frame handling\nI0111 17:40:33.160005 3979 log.go:181] (0x247ca10) (3) Data frame sent\nI0111 17:40:33.160291 3979 log.go:181] (0x2b2e000) Data frame received for 3\nI0111 17:40:33.160393 3979 log.go:181] (0x247ca10) (3) Data frame handling\nI0111 17:40:33.160495 3979 log.go:181] (0x247ca10) (3) Data frame sent\nI0111 17:40:33.160573 3979 log.go:181] (0x2b2e000) Data frame received for 5\nI0111 17:40:33.160641 3979 log.go:181] (0x247db90) (5) Data frame handling\nI0111 17:40:33.160710 3979 log.go:181] (0x247db90) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.132.101:80/\nI0111 17:40:33.165858 3979 log.go:181] (0x2b2e000) Data frame received for 3\nI0111 17:40:33.165968 3979 log.go:181] (0x247ca10) (3) Data frame handling\nI0111 17:40:33.166106 3979 log.go:181] (0x247ca10) (3) Data frame sent\nI0111 17:40:33.166700 3979 log.go:181] (0x2b2e000) Data frame received for 5\nI0111 17:40:33.166830 3979 log.go:181] (0x247db90) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.132.101:80/\nI0111 17:40:33.166994 3979 log.go:181] (0x2b2e000) Data frame received for 3\nI0111 17:40:33.167121 3979 log.go:181] (0x247ca10) (3) Data frame handling\nI0111 17:40:33.167214 3979 log.go:181] (0x247db90) (5) Data frame sent\nI0111 17:40:33.167308 3979 log.go:181] (0x247ca10) (3) Data frame sent\nI0111 17:40:33.172028 3979 log.go:181] (0x2b2e000) Data frame received for 3\nI0111 17:40:33.172104 3979 log.go:181] (0x247ca10) (3) Data frame handling\nI0111 17:40:33.172188 3979 log.go:181] (0x247ca10) (3) Data frame sent\nI0111 17:40:33.172632 3979 log.go:181] (0x2b2e000) Data frame received for 5\nI0111 17:40:33.172765 3979 log.go:181] (0x247db90) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.132.101:80/\nI0111 17:40:33.172925 3979 log.go:181] (0x2b2e000) Data frame received for 3\nI0111 17:40:33.173040 3979 log.go:181] (0x247ca10) (3) Data frame handling\nI0111 17:40:33.173140 3979 log.go:181] (0x247ca10) (3) Data frame sent\nI0111 17:40:33.173233 3979 log.go:181] (0x247db90) (5) Data frame sent\nI0111 17:40:33.178063 3979 log.go:181] (0x2b2e000) Data frame received for 3\nI0111 17:40:33.178238 3979 log.go:181] (0x247ca10) (3) Data frame handling\nI0111 17:40:33.178418 3979 log.go:181] (0x247ca10) (3) Data frame sent\nI0111 17:40:33.178605 3979 log.go:181] (0x2b2e000) Data frame received for 3\nI0111 17:40:33.178764 3979 log.go:181] (0x247ca10) (3) Data frame handling\nI0111 17:40:33.178962 3979 log.go:181] (0x2b2e000) Data frame received for 5\nI0111 17:40:33.179148 3979 log.go:181] (0x247db90) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.132.101:80/\nI0111 17:40:33.179294 3979 log.go:181] (0x247ca10) (3) Data frame sent\nI0111 17:40:33.179443 3979 log.go:181] (0x247db90) (5) Data frame sent\nI0111 17:40:33.184543 3979 log.go:181] (0x2b2e000) Data frame received for 3\nI0111 17:40:33.184671 3979 log.go:181] (0x247ca10) (3) Data frame handling\nI0111 17:40:33.184794 3979 log.go:181] (0x247ca10) (3) Data frame sent\nI0111 17:40:33.185379 3979 log.go:181] (0x2b2e000) Data frame received for 3\nI0111 17:40:33.185570 3979 log.go:181] (0x247ca10) (3) Data frame handling\nI0111 17:40:33.185707 3979 log.go:181] (0x2b2e000) Data frame received for 5\nI0111 17:40:33.185902 3979 log.go:181] (0x247db90) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.132.101:80/\nI0111 17:40:33.186122 3979 log.go:181] (0x247ca10) (3) Data frame sent\nI0111 17:40:33.186302 3979 log.go:181] 
(0x247db90) (5) Data frame sent\nI0111 17:40:33.191128 3979 log.go:181] (0x2b2e000) Data frame received for 3\nI0111 17:40:33.191220 3979 log.go:181] (0x247ca10) (3) Data frame handling\nI0111 17:40:33.191319 3979 log.go:181] (0x247ca10) (3) Data frame sent\nI0111 17:40:33.192260 3979 log.go:181] (0x2b2e000) Data frame received for 3\nI0111 17:40:33.192378 3979 log.go:181] (0x247ca10) (3) Data frame handling\nI0111 17:40:33.192520 3979 log.go:181] (0x2b2e000) Data frame received for 5\nI0111 17:40:33.192688 3979 log.go:181] (0x247db90) (5) Data frame handling\nI0111 17:40:33.197171 3979 log.go:181] (0x2b2e000) Data frame received for 1\nI0111 17:40:33.197296 3979 log.go:181] (0x2b2e070) (1) Data frame handling\nI0111 17:40:33.197405 3979 log.go:181] (0x2b2e070) (1) Data frame sent\nI0111 17:40:33.198637 3979 log.go:181] (0x2b2e000) (0x2b2e070) Stream removed, broadcasting: 1\nI0111 17:40:33.199435 3979 log.go:181] (0x2b2e000) Go away received\nI0111 17:40:33.202846 3979 log.go:181] (0x2b2e000) (0x2b2e070) Stream removed, broadcasting: 1\nI0111 17:40:33.203068 3979 log.go:181] (0x2b2e000) (0x247ca10) Stream removed, broadcasting: 3\nI0111 17:40:33.203251 3979 log.go:181] (0x2b2e000) (0x247db90) Stream removed, broadcasting: 5\n" Jan 11 17:40:33.216: INFO: stdout: "\naffinity-clusterip-5cgtz\naffinity-clusterip-5cgtz\naffinity-clusterip-5cgtz\naffinity-clusterip-5cgtz\naffinity-clusterip-5cgtz\naffinity-clusterip-5cgtz\naffinity-clusterip-5cgtz\naffinity-clusterip-5cgtz\naffinity-clusterip-5cgtz\naffinity-clusterip-5cgtz\naffinity-clusterip-5cgtz\naffinity-clusterip-5cgtz\naffinity-clusterip-5cgtz\naffinity-clusterip-5cgtz\naffinity-clusterip-5cgtz\naffinity-clusterip-5cgtz" Jan 11 17:40:33.217: INFO: Received response from host: affinity-clusterip-5cgtz Jan 11 17:40:33.217: INFO: Received response from host: affinity-clusterip-5cgtz Jan 11 17:40:33.217: INFO: Received response from host: affinity-clusterip-5cgtz Jan 11 17:40:33.217: INFO: Received response from host: affinity-clusterip-5cgtz Jan 11 17:40:33.217: INFO: Received response from host: affinity-clusterip-5cgtz Jan 11 17:40:33.217: INFO: Received response from host: affinity-clusterip-5cgtz Jan 11 17:40:33.217: INFO: Received response from host: affinity-clusterip-5cgtz Jan 11 17:40:33.217: INFO: Received response from host: affinity-clusterip-5cgtz Jan 11 17:40:33.217: INFO: Received response from host: affinity-clusterip-5cgtz Jan 11 17:40:33.217: INFO: Received response from host: affinity-clusterip-5cgtz Jan 11 17:40:33.217: INFO: Received response from host: affinity-clusterip-5cgtz Jan 11 17:40:33.217: INFO: Received response from host: affinity-clusterip-5cgtz Jan 11 17:40:33.217: INFO: Received response from host: affinity-clusterip-5cgtz Jan 11 17:40:33.217: INFO: Received response from host: affinity-clusterip-5cgtz Jan 11 17:40:33.217: INFO: Received response from host: affinity-clusterip-5cgtz Jan 11 17:40:33.217: INFO: Received response from host: affinity-clusterip-5cgtz Jan 11 17:40:33.218: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip in namespace services-1366, will wait for the garbage collector to delete the pods Jan 11 17:40:33.414: INFO: Deleting ReplicationController affinity-clusterip took: 80.632904ms Jan 11 17:40:33.715: INFO: Terminating ReplicationController affinity-clusterip pods took: 301.20303ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 
17:41:39.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1366" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 • [SLOW TEST:82.384 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":309,"completed":187,"skipped":3322,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 17:41:39.904: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should be able to change the type from NodePort to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating a service nodeport-service with the type=NodePort in namespace services-6195 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-6195 STEP: creating replication controller externalsvc in namespace services-6195 I0111 17:41:40.139567 10 runners.go:190] Created replication controller with name: externalsvc, namespace: services-6195, replica count: 2 I0111 17:41:43.191053 10 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0111 17:41:46.191963 10 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Jan 11 17:41:46.249: INFO: Creating new exec pod Jan 11 17:41:50.313: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-6195 exec execpodnjg96 -- /bin/sh -x -c nslookup nodeport-service.services-6195.svc.cluster.local' Jan 11 17:41:51.782: INFO: stderr: "I0111 17:41:51.643056 3999 log.go:181] (0x27ea3f0) (0x27ea4d0) Create stream\nI0111 17:41:51.645883 3999 log.go:181] (0x27ea3f0) (0x27ea4d0) Stream added, broadcasting: 1\nI0111 17:41:51.664349 3999 log.go:181] (0x27ea3f0) Reply frame received for 1\nI0111 17:41:51.664900 3999 log.go:181] (0x27ea3f0) (0x27520e0) Create stream\nI0111 17:41:51.664971 3999 log.go:181] (0x27ea3f0) (0x27520e0) Stream added, broadcasting: 3\nI0111 17:41:51.666149 3999 log.go:181] (0x27ea3f0) Reply frame received for 
3\nI0111 17:41:51.666355 3999 log.go:181] (0x27ea3f0) (0x27522a0) Create stream\nI0111 17:41:51.666411 3999 log.go:181] (0x27ea3f0) (0x27522a0) Stream added, broadcasting: 5\nI0111 17:41:51.667515 3999 log.go:181] (0x27ea3f0) Reply frame received for 5\nI0111 17:41:51.754373 3999 log.go:181] (0x27ea3f0) Data frame received for 5\nI0111 17:41:51.754928 3999 log.go:181] (0x27522a0) (5) Data frame handling\nI0111 17:41:51.755872 3999 log.go:181] (0x27522a0) (5) Data frame sent\n+ nslookup nodeport-service.services-6195.svc.cluster.local\nI0111 17:41:51.764765 3999 log.go:181] (0x27ea3f0) Data frame received for 3\nI0111 17:41:51.764995 3999 log.go:181] (0x27520e0) (3) Data frame handling\nI0111 17:41:51.765155 3999 log.go:181] (0x27520e0) (3) Data frame sent\nI0111 17:41:51.765317 3999 log.go:181] (0x27ea3f0) Data frame received for 3\nI0111 17:41:51.765401 3999 log.go:181] (0x27520e0) (3) Data frame handling\nI0111 17:41:51.765521 3999 log.go:181] (0x27520e0) (3) Data frame sent\nI0111 17:41:51.765622 3999 log.go:181] (0x27ea3f0) Data frame received for 3\nI0111 17:41:51.765717 3999 log.go:181] (0x27520e0) (3) Data frame handling\nI0111 17:41:51.765893 3999 log.go:181] (0x27ea3f0) Data frame received for 5\nI0111 17:41:51.766028 3999 log.go:181] (0x27522a0) (5) Data frame handling\nI0111 17:41:51.767356 3999 log.go:181] (0x27ea3f0) Data frame received for 1\nI0111 17:41:51.767437 3999 log.go:181] (0x27ea4d0) (1) Data frame handling\nI0111 17:41:51.767527 3999 log.go:181] (0x27ea4d0) (1) Data frame sent\nI0111 17:41:51.767810 3999 log.go:181] (0x27ea3f0) (0x27ea4d0) Stream removed, broadcasting: 1\nI0111 17:41:51.770519 3999 log.go:181] (0x27ea3f0) Go away received\nI0111 17:41:51.773189 3999 log.go:181] (0x27ea3f0) (0x27ea4d0) Stream removed, broadcasting: 1\nI0111 17:41:51.773378 3999 log.go:181] (0x27ea3f0) (0x27520e0) Stream removed, broadcasting: 3\nI0111 17:41:51.773531 3999 log.go:181] (0x27ea3f0) (0x27522a0) Stream removed, broadcasting: 5\n" Jan 11 17:41:51.783: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-6195.svc.cluster.local\tcanonical name = externalsvc.services-6195.svc.cluster.local.\nName:\texternalsvc.services-6195.svc.cluster.local\nAddress: 10.96.65.209\n\n" STEP: deleting ReplicationController externalsvc in namespace services-6195, will wait for the garbage collector to delete the pods Jan 11 17:41:51.850: INFO: Deleting ReplicationController externalsvc took: 9.082853ms Jan 11 17:41:52.451: INFO: Terminating ReplicationController externalsvc pods took: 600.922238ms Jan 11 17:42:00.187: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 17:42:00.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6195" for this suite. 
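Note: the NodePort-to-ExternalName test above ends up with a Service of type ExternalName whose name resolves as a CNAME, which is what the nslookup output shows. A minimal hand-written equivalent is sketched below; the manifest and the dns-check pod are illustrative assumptions (only the service and namespace names are reused from the log), not the e2e framework's own objects.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: nodeport-service
  namespace: services-6195
spec:
  type: ExternalName
  externalName: externalsvc.services-6195.svc.cluster.local
EOF
# The service name should now resolve as a CNAME to the external name, as in the nslookup above:
kubectl -n services-6195 run dns-check --rm -i --restart=Never --image=busybox -- \
  nslookup nodeport-service.services-6195.svc.cluster.local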
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 • [SLOW TEST:20.427 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":309,"completed":188,"skipped":3336,"failed":0} SSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 17:42:00.333: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating pod pod-subpath-test-configmap-5xhm STEP: Creating a pod to test atomic-volume-subpath Jan 11 17:42:00.502: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-5xhm" in namespace "subpath-9502" to be "Succeeded or Failed" Jan 11 17:42:00.510: INFO: Pod "pod-subpath-test-configmap-5xhm": Phase="Pending", Reason="", readiness=false. Elapsed: 6.98349ms Jan 11 17:42:02.538: INFO: Pod "pod-subpath-test-configmap-5xhm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035621539s Jan 11 17:42:04.566: INFO: Pod "pod-subpath-test-configmap-5xhm": Phase="Running", Reason="", readiness=true. Elapsed: 4.063716611s Jan 11 17:42:06.588: INFO: Pod "pod-subpath-test-configmap-5xhm": Phase="Running", Reason="", readiness=true. Elapsed: 6.085364028s Jan 11 17:42:08.652: INFO: Pod "pod-subpath-test-configmap-5xhm": Phase="Running", Reason="", readiness=true. Elapsed: 8.149325367s Jan 11 17:42:10.659: INFO: Pod "pod-subpath-test-configmap-5xhm": Phase="Running", Reason="", readiness=true. Elapsed: 10.156841117s Jan 11 17:42:12.675: INFO: Pod "pod-subpath-test-configmap-5xhm": Phase="Running", Reason="", readiness=true. Elapsed: 12.17242754s Jan 11 17:42:14.682: INFO: Pod "pod-subpath-test-configmap-5xhm": Phase="Running", Reason="", readiness=true. Elapsed: 14.179594176s Jan 11 17:42:16.688: INFO: Pod "pod-subpath-test-configmap-5xhm": Phase="Running", Reason="", readiness=true. Elapsed: 16.185492935s Jan 11 17:42:18.705: INFO: Pod "pod-subpath-test-configmap-5xhm": Phase="Running", Reason="", readiness=true. Elapsed: 18.202615164s Jan 11 17:42:20.712: INFO: Pod "pod-subpath-test-configmap-5xhm": Phase="Running", Reason="", readiness=true. 
Elapsed: 20.209824517s Jan 11 17:42:22.783: INFO: Pod "pod-subpath-test-configmap-5xhm": Phase="Running", Reason="", readiness=true. Elapsed: 22.280208277s Jan 11 17:42:24.798: INFO: Pod "pod-subpath-test-configmap-5xhm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.294956498s STEP: Saw pod success Jan 11 17:42:24.798: INFO: Pod "pod-subpath-test-configmap-5xhm" satisfied condition "Succeeded or Failed" Jan 11 17:42:24.806: INFO: Trying to get logs from node leguer-worker2 pod pod-subpath-test-configmap-5xhm container test-container-subpath-configmap-5xhm: STEP: delete the pod Jan 11 17:42:24.885: INFO: Waiting for pod pod-subpath-test-configmap-5xhm to disappear Jan 11 17:42:24.891: INFO: Pod pod-subpath-test-configmap-5xhm no longer exists STEP: Deleting pod pod-subpath-test-configmap-5xhm Jan 11 17:42:24.891: INFO: Deleting pod "pod-subpath-test-configmap-5xhm" in namespace "subpath-9502" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 17:42:24.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-9502" for this suite. • [SLOW TEST:24.578 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":309,"completed":189,"skipped":3340,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 17:42:24.913: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating configMap with name projected-configmap-test-volume-337ec25e-e156-45dc-90d6-a50a215b0059 STEP: Creating a pod to test consume configMaps Jan 11 17:42:25.037: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c222a518-856f-44b0-b5b3-c279456023a9" in namespace "projected-7766" to be "Succeeded or Failed" Jan 11 17:42:25.060: INFO: Pod "pod-projected-configmaps-c222a518-856f-44b0-b5b3-c279456023a9": Phase="Pending", Reason="", readiness=false. Elapsed: 22.194952ms Jan 11 17:42:27.071: INFO: Pod "pod-projected-configmaps-c222a518-856f-44b0-b5b3-c279456023a9": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.034076555s Jan 11 17:42:29.080: INFO: Pod "pod-projected-configmaps-c222a518-856f-44b0-b5b3-c279456023a9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.042614838s STEP: Saw pod success Jan 11 17:42:29.080: INFO: Pod "pod-projected-configmaps-c222a518-856f-44b0-b5b3-c279456023a9" satisfied condition "Succeeded or Failed" Jan 11 17:42:29.087: INFO: Trying to get logs from node leguer-worker pod pod-projected-configmaps-c222a518-856f-44b0-b5b3-c279456023a9 container projected-configmap-volume-test: STEP: delete the pod Jan 11 17:42:29.136: INFO: Waiting for pod pod-projected-configmaps-c222a518-856f-44b0-b5b3-c279456023a9 to disappear Jan 11 17:42:29.145: INFO: Pod pod-projected-configmaps-c222a518-856f-44b0-b5b3-c279456023a9 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 17:42:29.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7766" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":309,"completed":190,"skipped":3356,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 17:42:29.171: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should run through the lifecycle of a ServiceAccount [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating a ServiceAccount STEP: watching for the ServiceAccount to be added STEP: patching the ServiceAccount STEP: finding ServiceAccount in list of all ServiceAccounts (by LabelSelector) STEP: deleting the ServiceAccount [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 17:42:29.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-8684" for this suite. 
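Note: the ServiceAccount lifecycle steps above (create, patch, find by label selector, delete) can be approximated by hand roughly as follows. The account name and label are made up for illustration, and the patch step is shown here as a simple label update rather than the framework's exact patch.
kubectl create serviceaccount lifecycle-demo
kubectl label serviceaccount lifecycle-demo purpose=e2e-demo    # stands in for the patch step
kubectl get serviceaccounts -l purpose=e2e-demo                 # locate it via a label selector
kubectl delete serviceaccount lifecycle-demo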
•{"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":309,"completed":191,"skipped":3375,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 17:42:29.514: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 Jan 11 17:42:29.626: INFO: Waiting up to 1m0s for all nodes to be ready Jan 11 17:43:29.721: INFO: Waiting for terminating namespaces to be deleted... [BeforeEach] PreemptionExecutionPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 17:43:29.726: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption-path STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] PreemptionExecutionPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:488 STEP: Finding an available node STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. Jan 11 17:43:33.891: INFO: found a healthy node: leguer-worker [It] runs ReplicaSets to verify preemption running path [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 11 17:43:54.081: INFO: pods created so far: [1 1 1] Jan 11 17:43:54.082: INFO: length of pods created so far: 3 Jan 11 17:44:36.107: INFO: pods created so far: [2 2 1] [AfterEach] PreemptionExecutionPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 17:44:43.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-path-7538" for this suite. [AfterEach] PreemptionExecutionPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:462 [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 17:44:43.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-8644" for this suite. 
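Note: preemption of the kind exercised by PreemptionExecutionPath relies on PriorityClasses; a minimal illustrative pair (a class plus a pod that references it) might look like the sketch below. The names and the priority value are assumptions, not the values the test registers.
kubectl apply -f - <<'EOF'
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: demo-high-priority
value: 1000000
globalDefault: false
description: "Pods at this priority may preempt lower-priority pods when nodes are full."
---
apiVersion: v1
kind: Pod
metadata:
  name: high-priority-pod
spec:
  priorityClassName: demo-high-priority
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.2
EOF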
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 • [SLOW TEST:133.860 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PreemptionExecutionPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:451 runs ReplicaSets to verify preemption running path [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":309,"completed":192,"skipped":3403,"failed":0} SSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 17:44:43.374: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test substitution in container's command Jan 11 17:44:43.511: INFO: Waiting up to 5m0s for pod "var-expansion-1ac21b5f-1df0-4993-bd5c-ac83956f79aa" in namespace "var-expansion-2645" to be "Succeeded or Failed" Jan 11 17:44:43.534: INFO: Pod "var-expansion-1ac21b5f-1df0-4993-bd5c-ac83956f79aa": Phase="Pending", Reason="", readiness=false. Elapsed: 23.599005ms Jan 11 17:44:45.543: INFO: Pod "var-expansion-1ac21b5f-1df0-4993-bd5c-ac83956f79aa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03198433s Jan 11 17:44:47.636: INFO: Pod "var-expansion-1ac21b5f-1df0-4993-bd5c-ac83956f79aa": Phase="Pending", Reason="", readiness=false. Elapsed: 4.124843406s Jan 11 17:44:49.641: INFO: Pod "var-expansion-1ac21b5f-1df0-4993-bd5c-ac83956f79aa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.13010308s STEP: Saw pod success Jan 11 17:44:49.641: INFO: Pod "var-expansion-1ac21b5f-1df0-4993-bd5c-ac83956f79aa" satisfied condition "Succeeded or Failed" Jan 11 17:44:49.644: INFO: Trying to get logs from node leguer-worker2 pod var-expansion-1ac21b5f-1df0-4993-bd5c-ac83956f79aa container dapi-container: STEP: delete the pod Jan 11 17:44:49.948: INFO: Waiting for pod var-expansion-1ac21b5f-1df0-4993-bd5c-ac83956f79aa to disappear Jan 11 17:44:50.037: INFO: Pod var-expansion-1ac21b5f-1df0-4993-bd5c-ac83956f79aa no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 17:44:50.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-2645" for this suite. 
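Note: the variable-expansion test above depends on the kubelet expanding $(VAR) references in a container's command/args from its environment; a minimal illustrative pod (name, image, and values are assumptions) is sketched below.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    env:
    - name: TEST_VAR
      value: test-value
    command: ["/bin/echo"]
    args: ["TEST_VAR expands to: $(TEST_VAR)"]   # substituted by Kubernetes before exec
EOF
kubectl logs var-expansion-demo   # expected output: "TEST_VAR expands to: test-value"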
• [SLOW TEST:6.710 seconds] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":309,"completed":193,"skipped":3407,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] PodTemplates should delete a collection of pod templates [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 17:44:50.087: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename podtemplate STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a collection of pod templates [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Create set of pod templates Jan 11 17:44:52.013: INFO: created test-podtemplate-1 Jan 11 17:44:52.054: INFO: created test-podtemplate-2 Jan 11 17:44:52.353: INFO: created test-podtemplate-3 STEP: get a list of pod templates with a label in the current namespace STEP: delete collection of pod templates Jan 11 17:44:52.401: INFO: requesting DeleteCollection of pod templates STEP: check that the list of pod templates matches the requested quantity Jan 11 17:44:52.530: INFO: requesting list of pod templates to confirm quantity [AfterEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 17:44:52.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "podtemplate-2894" for this suite. 
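Note: a rough hand-rolled equivalent of the pod-template steps above (create several labeled PodTemplates, list them by label, then delete them together, analogous to the test's DeleteCollection). Names, label, and image are illustrative assumptions.
for i in 1 2 3; do
kubectl apply -f - <<EOF
apiVersion: v1
kind: PodTemplate
metadata:
  name: test-podtemplate-$i
  labels:
    podtemplate-set: demo
template:
  spec:
    containers:
    - name: pause
      image: k8s.gcr.io/pause:3.2
EOF
done
kubectl get podtemplates -l podtemplate-set=demo
kubectl delete podtemplates -l podtemplate-set=demo   # delete everything matching the label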
•{"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":309,"completed":194,"skipped":3434,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 17:44:52.548: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test emptydir 0777 on tmpfs Jan 11 17:44:52.662: INFO: Waiting up to 5m0s for pod "pod-5304260b-522d-472b-bee9-3ed3144fd9b7" in namespace "emptydir-1556" to be "Succeeded or Failed" Jan 11 17:44:52.688: INFO: Pod "pod-5304260b-522d-472b-bee9-3ed3144fd9b7": Phase="Pending", Reason="", readiness=false. Elapsed: 25.571629ms Jan 11 17:44:54.696: INFO: Pod "pod-5304260b-522d-472b-bee9-3ed3144fd9b7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0336616s Jan 11 17:44:56.702: INFO: Pod "pod-5304260b-522d-472b-bee9-3ed3144fd9b7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.040145489s STEP: Saw pod success Jan 11 17:44:56.703: INFO: Pod "pod-5304260b-522d-472b-bee9-3ed3144fd9b7" satisfied condition "Succeeded or Failed" Jan 11 17:44:56.742: INFO: Trying to get logs from node leguer-worker2 pod pod-5304260b-522d-472b-bee9-3ed3144fd9b7 container test-container: STEP: delete the pod Jan 11 17:44:56.763: INFO: Waiting for pod pod-5304260b-522d-472b-bee9-3ed3144fd9b7 to disappear Jan 11 17:44:56.770: INFO: Pod pod-5304260b-522d-472b-bee9-3ed3144fd9b7 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 17:44:56.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1556" for this suite. 
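Note: the emptyDir case above mounts a tmpfs-backed scratch volume and checks that a root-owned file created with 0777 permissions keeps them; a rough stand-alone pod showing the same volume setup (name, image, and paths are assumptions) looks like this.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "touch /mnt/volume/testfile && chmod 0777 /mnt/volume/testfile && ls -l /mnt/volume && mount | grep /mnt/volume"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/volume
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory      # tmpfs-backed emptyDir
EOF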
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":195,"skipped":3449,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 17:44:56.784: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating configMap with name configmap-test-volume-07b0789a-1b15-429c-8732-753395f46f4c STEP: Creating a pod to test consume configMaps Jan 11 17:44:56.895: INFO: Waiting up to 5m0s for pod "pod-configmaps-4a42fb06-c2ad-4e37-82d4-c3534f03ee31" in namespace "configmap-9013" to be "Succeeded or Failed" Jan 11 17:44:56.913: INFO: Pod "pod-configmaps-4a42fb06-c2ad-4e37-82d4-c3534f03ee31": Phase="Pending", Reason="", readiness=false. Elapsed: 17.996652ms Jan 11 17:44:58.922: INFO: Pod "pod-configmaps-4a42fb06-c2ad-4e37-82d4-c3534f03ee31": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026629665s Jan 11 17:45:00.931: INFO: Pod "pod-configmaps-4a42fb06-c2ad-4e37-82d4-c3534f03ee31": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035651152s STEP: Saw pod success Jan 11 17:45:00.931: INFO: Pod "pod-configmaps-4a42fb06-c2ad-4e37-82d4-c3534f03ee31" satisfied condition "Succeeded or Failed" Jan 11 17:45:00.937: INFO: Trying to get logs from node leguer-worker2 pod pod-configmaps-4a42fb06-c2ad-4e37-82d4-c3534f03ee31 container agnhost-container: STEP: delete the pod Jan 11 17:45:00.977: INFO: Waiting for pod pod-configmaps-4a42fb06-c2ad-4e37-82d4-c3534f03ee31 to disappear Jan 11 17:45:00.990: INFO: Pod pod-configmaps-4a42fb06-c2ad-4e37-82d4-c3534f03ee31 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 17:45:00.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9013" for this suite. 
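Note: a minimal non-root consumer of a ConfigMap volume, of the kind the test above runs; all names, the UID, and the image below are illustrative assumptions rather than the framework's manifest.
kubectl create configmap demo-config --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: configmap-nonroot-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000           # run the container as a non-root UID
  containers:
  - name: agnhost-container
    image: busybox
    command: ["cat", "/etc/config/data-1"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/config
  volumes:
  - name: cfg
    configMap:
      name: demo-config
EOF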
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":309,"completed":196,"skipped":3505,"failed":0} S ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 17:45:01.007: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating pod busybox-8ab1c21f-1311-48be-a7ac-96f45f69cfb1 in namespace container-probe-8672 Jan 11 17:45:05.148: INFO: Started pod busybox-8ab1c21f-1311-48be-a7ac-96f45f69cfb1 in namespace container-probe-8672 STEP: checking the pod's current state and verifying that restartCount is present Jan 11 17:45:05.152: INFO: Initial restart count of pod busybox-8ab1c21f-1311-48be-a7ac-96f45f69cfb1 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 17:49:06.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8672" for this suite. 
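Note: the probe test above expects restartCount to stay at 0 because the exec probe keeps succeeding; an illustrative equivalent pod (name, image, and timings are assumptions) is sketched below.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec-demo
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "touch /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5
EOF
# While /tmp/health exists the probe passes, so the restart count should remain 0:
kubectl get pod liveness-exec-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'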
• [SLOW TEST:245.458 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":309,"completed":197,"skipped":3506,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 17:49:06.468: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 11 17:49:15.874: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created Jan 11 17:49:17.893: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745984155, loc:(*time.Location)(0x5f133f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745984155, loc:(*time.Location)(0x5f133f0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745984156, loc:(*time.Location)(0x5f133f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745984155, loc:(*time.Location)(0x5f133f0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 11 17:49:20.931: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 11 17:49:20.939: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-6390-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 
is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 17:49:22.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4059" for this suite. STEP: Destroying namespace "webhook-4059-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:15.880 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":309,"completed":198,"skipped":3540,"failed":0} SSSSS ------------------------------ [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 17:49:22.349: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating the pod STEP: waiting for pod running STEP: creating a file in subpath Jan 11 17:49:26.518: INFO: ExecWithOptions {Command:[/bin/sh -c touch /volume_mount/mypath/foo/test.log] Namespace:var-expansion-5018 PodName:var-expansion-27ebbed4-496f-4f80-aa41-65e9aa7afa5a ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 11 17:49:26.518: INFO: >>> kubeConfig: /root/.kube/config I0111 17:49:26.634345 10 log.go:181] (0xa99e620) (0xa99e7e0) Create stream I0111 17:49:26.634514 10 log.go:181] (0xa99e620) (0xa99e7e0) Stream added, broadcasting: 1 I0111 17:49:26.642315 10 log.go:181] (0xa99e620) Reply frame received for 1 I0111 17:49:26.642592 10 log.go:181] (0xa99e620) (0xa99ec40) Create stream I0111 17:49:26.642691 10 log.go:181] (0xa99e620) (0xa99ec40) Stream added, broadcasting: 3 I0111 17:49:26.644352 10 log.go:181] (0xa99e620) Reply frame received for 3 I0111 17:49:26.644539 10 log.go:181] (0xa99e620) (0xa99f0a0) Create stream I0111 17:49:26.644643 10 log.go:181] (0xa99e620) (0xa99f0a0) Stream added, broadcasting: 5 I0111 17:49:26.646610 10 log.go:181] (0xa99e620) Reply frame received for 5 I0111 17:49:26.727299 10 log.go:181] (0xa99e620) Data frame received for 3 I0111 17:49:26.727675 10 log.go:181] (0xa99ec40) (3) Data frame handling I0111 17:49:26.728009 10 log.go:181] (0xa99e620) Data frame received for 5 I0111 17:49:26.728441 10 log.go:181] (0xa99f0a0) (5) Data frame 
handling I0111 17:49:26.729047 10 log.go:181] (0xa99e620) Data frame received for 1 I0111 17:49:26.729370 10 log.go:181] (0xa99e7e0) (1) Data frame handling I0111 17:49:26.729668 10 log.go:181] (0xa99e7e0) (1) Data frame sent I0111 17:49:26.729940 10 log.go:181] (0xa99e620) (0xa99e7e0) Stream removed, broadcasting: 1 I0111 17:49:26.730268 10 log.go:181] (0xa99e620) Go away received I0111 17:49:26.730742 10 log.go:181] (0xa99e620) (0xa99e7e0) Stream removed, broadcasting: 1 I0111 17:49:26.730896 10 log.go:181] (0xa99e620) (0xa99ec40) Stream removed, broadcasting: 3 I0111 17:49:26.731046 10 log.go:181] (0xa99e620) (0xa99f0a0) Stream removed, broadcasting: 5 STEP: test for file in mounted path Jan 11 17:49:26.737: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /subpath_mount/test.log] Namespace:var-expansion-5018 PodName:var-expansion-27ebbed4-496f-4f80-aa41-65e9aa7afa5a ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 11 17:49:26.737: INFO: >>> kubeConfig: /root/.kube/config I0111 17:49:26.846071 10 log.go:181] (0x65e04d0) (0x65e05b0) Create stream I0111 17:49:26.846224 10 log.go:181] (0x65e04d0) (0x65e05b0) Stream added, broadcasting: 1 I0111 17:49:26.851302 10 log.go:181] (0x65e04d0) Reply frame received for 1 I0111 17:49:26.851642 10 log.go:181] (0x65e04d0) (0x65e0930) Create stream I0111 17:49:26.851832 10 log.go:181] (0x65e04d0) (0x65e0930) Stream added, broadcasting: 3 I0111 17:49:26.854322 10 log.go:181] (0x65e04d0) Reply frame received for 3 I0111 17:49:26.854465 10 log.go:181] (0x65e04d0) (0x65e0af0) Create stream I0111 17:49:26.854536 10 log.go:181] (0x65e04d0) (0x65e0af0) Stream added, broadcasting: 5 I0111 17:49:26.855830 10 log.go:181] (0x65e04d0) Reply frame received for 5 I0111 17:49:26.900264 10 log.go:181] (0x65e04d0) Data frame received for 5 I0111 17:49:26.900492 10 log.go:181] (0x65e0af0) (5) Data frame handling I0111 17:49:26.900664 10 log.go:181] (0x65e04d0) Data frame received for 3 I0111 17:49:26.900822 10 log.go:181] (0x65e0930) (3) Data frame handling I0111 17:49:26.901928 10 log.go:181] (0x65e04d0) Data frame received for 1 I0111 17:49:26.902074 10 log.go:181] (0x65e05b0) (1) Data frame handling I0111 17:49:26.902283 10 log.go:181] (0x65e05b0) (1) Data frame sent I0111 17:49:26.902455 10 log.go:181] (0x65e04d0) (0x65e05b0) Stream removed, broadcasting: 1 I0111 17:49:26.902690 10 log.go:181] (0x65e04d0) Go away received I0111 17:49:26.903099 10 log.go:181] (0x65e04d0) (0x65e05b0) Stream removed, broadcasting: 1 I0111 17:49:26.903257 10 log.go:181] (0x65e04d0) (0x65e0930) Stream removed, broadcasting: 3 I0111 17:49:26.903400 10 log.go:181] (0x65e04d0) (0x65e0af0) Stream removed, broadcasting: 5 STEP: updating the annotation value Jan 11 17:49:27.432: INFO: Successfully updated pod "var-expansion-27ebbed4-496f-4f80-aa41-65e9aa7afa5a" STEP: waiting for annotated pod running STEP: deleting the pod gracefully Jan 11 17:49:27.449: INFO: Deleting pod "var-expansion-27ebbed4-496f-4f80-aa41-65e9aa7afa5a" in namespace "var-expansion-5018" Jan 11 17:49:27.469: INFO: Wait up to 5m0s for pod "var-expansion-27ebbed4-496f-4f80-aa41-65e9aa7afa5a" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 17:50:11.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-5018" for this suite. 
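The var-expansion spec above mounts the same volume twice, once at /volume_mount and once through a subpath, then touches a file under the full path and checks it is visible through the subpath mount (the "touch" and "test -f" execs in the stream output). The subpath itself is produced by $(VAR) expansion against the container's environment. A simplified, self-contained re-implementation of that expansion rule, not the kubelet's actual code, assuming the documented behaviour that "$$" escapes a literal "$" and unresolvable references are left untouched:

package main

import "fmt"

// expand is a simplified version of the $(VAR) substitution Kubernetes applies
// to fields such as subPathExpr: "$(NAME)" is replaced from the supplied
// environment, "$$" collapses to a literal "$", and unknown references are
// left as-is.
func expand(input string, env map[string]string) string {
	out := []rune{}
	runes := []rune(input)
	for i := 0; i < len(runes); i++ {
		if runes[i] != '$' {
			out = append(out, runes[i])
			continue
		}
		if i+1 < len(runes) && runes[i+1] == '$' { // "$$" -> "$"
			out = append(out, '$')
			i++
			continue
		}
		if i+1 < len(runes) && runes[i+1] == '(' {
			if end := indexFrom(runes, i+2, ')'); end >= 0 {
				name := string(runes[i+2 : end])
				if val, ok := env[name]; ok {
					out = append(out, []rune(val)...)
				} else {
					out = append(out, runes[i:end+1]...) // leave $(UNKNOWN) untouched
				}
				i = end
				continue
			}
		}
		out = append(out, runes[i])
	}
	return string(out)
}

func indexFrom(r []rune, start int, c rune) int {
	for i := start; i < len(r); i++ {
		if r[i] == c {
			return i
		}
	}
	return -1
}

func main() {
	env := map[string]string{"POD_NAME": "var-expansion-demo"}
	// e.g. a volumeMount with subPathExpr: "$(POD_NAME)/logs"
	fmt.Println(expand("$(POD_NAME)/logs", env)) // var-expansion-demo/logs
	fmt.Println(expand("$$(POD_NAME)", env))     // $(POD_NAME): the $$ escape yields a literal $
}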
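The AdmissionWebhook spec earlier in this run (namespace webhook-4059) registers a mutating webhook for a multi-version custom resource, flips the CRD's storage version from v1 to v2, and patches the object again; the webhook keeps mutating because the API server converts whatever is stored to the version the webhook registered for before calling it. A minimal sketch of the server side of such a webhook, using plain HTTP, an assumed /mutate path, and an illustrative marker patch rather than the e2e suite's actual sample-webhook-deployment:

package main

import (
	"encoding/base64"
	"encoding/json"
	"log"
	"net/http"
)

// mutate answers an AdmissionReview with a JSONPatch. It does not care which
// CRD version is currently the storage version: the request it receives is
// always at the version the webhook registered for.
func mutate(w http.ResponseWriter, r *http.Request) {
	var review struct {
		Request struct {
			UID string `json:"uid"`
		} `json:"request"`
	}
	if err := json.NewDecoder(r.Body).Decode(&review); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	// Illustrative patch only: add a marker field so a caller can tell the
	// object passed through the webhook.
	patch := `[{"op":"add","path":"/mutated","value":true}]`
	resp := map[string]interface{}{
		"apiVersion": "admission.k8s.io/v1",
		"kind":       "AdmissionReview",
		"response": map[string]interface{}{
			"uid":       review.Request.UID,
			"allowed":   true,
			"patchType": "JSONPatch",
			"patch":     base64.StdEncoding.EncodeToString([]byte(patch)),
		},
	}
	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(resp)
}

func main() {
	http.HandleFunc("/mutate", mutate)
	// The real e2e webhook serves TLS with a generated cert; plain HTTP here
	// keeps the sketch self-contained.
	log.Fatal(http.ListenAndServe(":8443", nil))
}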
• [SLOW TEST:49.214 seconds] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]","total":309,"completed":199,"skipped":3545,"failed":0} SSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 17:50:11.564: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:85 [It] deployment should support proportional scaling [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 11 17:50:11.632: INFO: Creating deployment "webserver-deployment" Jan 11 17:50:11.641: INFO: Waiting for observed generation 1 Jan 11 17:50:13.906: INFO: Waiting for all required pods to come up Jan 11 17:50:14.124: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running Jan 11 17:50:26.140: INFO: Waiting for deployment "webserver-deployment" to complete Jan 11 17:50:26.149: INFO: Updating deployment "webserver-deployment" with a non-existent image Jan 11 17:50:26.160: INFO: Updating deployment webserver-deployment Jan 11 17:50:26.160: INFO: Waiting for observed generation 2 Jan 11 17:50:28.415: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Jan 11 17:50:28.624: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Jan 11 17:50:28.628: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Jan 11 17:50:28.640: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Jan 11 17:50:28.640: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Jan 11 17:50:28.643: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Jan 11 17:50:28.650: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas Jan 11 17:50:28.650: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 Jan 11 17:50:28.661: INFO: Updating deployment webserver-deployment Jan 11 17:50:28.661: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas Jan 11 17:50:28.853: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Jan 11 17:50:28.979: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment 
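The deployment spec above deliberately stalls a rollout (the new ReplicaSet is stuck pulling the non-existent webserver:404 image) and then scales the Deployment from 10 to 30. The controller spreads the extra replicas across both ReplicaSets in proportion to their current sizes: the saturation point grows from 13 (10 replicas + maxSurge 3) to 33 (30 + 3), so the old ReplicaSet goes 8 -> 20 and the new one 5 -> 13, exactly the .spec.replicas values the log verifies. A small sketch of that arithmetic, simplified from the deployment controller's actual rounding and tie-breaking rules:

package main

import "fmt"

// proportionalScale redistributes replicas when a Deployment is resized in the
// middle of a rollout: each ReplicaSet grows in proportion to its share of the
// previous saturation point (replicas+maxSurge), and any rounding leftover is
// handed to the newest ReplicaSet. Simplified sketch, not the controller's code.
func proportionalScale(newest, oldest, oldMax, newDesired, maxSurge int32) (int32, int32) {
	allowed := newDesired + maxSurge // new saturation point, 30+3 = 33 here
	n := newest * allowed / oldMax   // floor of the new RS's proportional share
	o := oldest * allowed / oldMax   // floor of the old RS's proportional share
	n += allowed - (n + o)           // leftover replicas go to the newest RS
	return n, o
}

func main() {
	// Values from this run: new RS at 5, old RS at 8, previous saturation 10+3=13,
	// Deployment scaled to 30 with maxSurge 3.
	n, o := proportionalScale(5, 8, 13, 30, 3)
	fmt.Println(n, o) // 13 20, matching the .spec.replicas checks in the log
}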
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:79 Jan 11 17:50:31.211: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-8364 c6c5e74e-9ef3-4cee-967f-033cb63a3ada 209318 3 2021-01-11 17:50:11 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2021-01-11 17:50:11 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-01-11 17:50:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xb818bf8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2021-01-11 17:50:28 +0000 UTC,LastTransitionTime:2021-01-11 17:50:28 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-795d758f88" is progressing.,LastUpdateTime:2021-01-11 17:50:29 +0000 UTC,LastTransitionTime:2021-01-11 17:50:11 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Jan 11 17:50:31.455: INFO: New ReplicaSet "webserver-deployment-795d758f88" of Deployment 
"webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-795d758f88 deployment-8364 54ba47fa-213f-4684-8c80-368bd4b8c381 209308 3 2021-01-11 17:50:26 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment c6c5e74e-9ef3-4cee-967f-033cb63a3ada 0xb818f47 0xb818f48}] [] [{kube-controller-manager Update apps/v1 2021-01-11 17:50:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c6c5e74e-9ef3-4cee-967f-033cb63a3ada\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 795d758f88,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xb818fc8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 11 17:50:31.455: INFO: All old ReplicaSets of Deployment "webserver-deployment": Jan 11 17:50:31.456: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-dd94f59b7 deployment-8364 edb9c823-5d96-4180-a9cf-07bcd32f372e 209292 3 2021-01-11 17:50:11 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment c6c5e74e-9ef3-4cee-967f-033cb63a3ada 0xb819027 0xb819028}] [] [{kube-controller-manager Update apps/v1 2021-01-11 17:50:21 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c6c5e74e-9ef3-4cee-967f-033cb63a3ada\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: dd94f59b7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xb819098 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Jan 11 17:50:31.504: INFO: Pod "webserver-deployment-795d758f88-25mcv" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-25mcv webserver-deployment-795d758f88- deployment-8364 45f7c4de-ccf2-42b8-aacb-3956ccb2d8aa 209371 0 2021-01-11 17:50:26 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 54ba47fa-213f-4684-8c80-368bd4b8c381 0xb8194a7 0xb8194a8}] [] [{kube-controller-manager Update v1 2021-01-11 17:50:26 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"54ba47fa-213f-4684-8c80-368bd4b8c381\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-11 17:50:31 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.96\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5v26g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5v26g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5v26g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:26 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:26 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:26 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:10.244.1.96,StartTime:2021-01-11 17:50:26 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.96,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 11 17:50:31.505: INFO: Pod "webserver-deployment-795d758f88-4jb6z" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-4jb6z webserver-deployment-795d758f88- deployment-8364 b1a6ba31-2ba3-48b2-8941-c12a1548beeb 209210 0 2021-01-11 17:50:26 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 54ba47fa-213f-4684-8c80-368bd4b8c381 0xb819680 0xb819681}] [] [{kube-controller-manager Update v1 2021-01-11 17:50:26 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"54ba47fa-213f-4684-8c80-368bd4b8c381\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-11 17:50:26 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5v26g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5v26g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5v26g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2021-01-11 17:50:26 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:26 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2021-01-11 17:50:26 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 11 17:50:31.507: INFO: Pod "webserver-deployment-795d758f88-4wbvb" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-4wbvb webserver-deployment-795d758f88- deployment-8364 9039bba3-1f6d-4109-aa23-c6acce536e67 209309 0 2021-01-11 17:50:28 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 54ba47fa-213f-4684-8c80-368bd4b8c381 0xb819820 0xb819821}] [] [{kube-controller-manager Update v1 2021-01-11 17:50:28 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"54ba47fa-213f-4684-8c80-368bd4b8c381\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-11 17:50:29 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5v26g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5v26g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5v26g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2021-01-11 17:50:28 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:28 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2021-01-11 17:50:28 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 11 17:50:31.508: INFO: Pod "webserver-deployment-795d758f88-bnw5r" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-bnw5r webserver-deployment-795d758f88- deployment-8364 707298ef-3c79-46e2-b6ba-85f853cbdfb9 209360 0 2021-01-11 17:50:28 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 54ba47fa-213f-4684-8c80-368bd4b8c381 0xb8199c0 0xb8199c1}] [] [{kube-controller-manager Update v1 2021-01-11 17:50:28 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"54ba47fa-213f-4684-8c80-368bd4b8c381\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-11 17:50:30 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5v26g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5v26g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5v26g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2021-01-11 17:50:29 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:29 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:29 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2021-01-11 17:50:29 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 11 17:50:31.510: INFO: Pod "webserver-deployment-795d758f88-dw6q7" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-dw6q7 webserver-deployment-795d758f88- deployment-8364 f0ff3e33-a8c0-4179-af33-a43a0441c375 209365 0 2021-01-11 17:50:29 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 54ba47fa-213f-4684-8c80-368bd4b8c381 0xb819b60 0xb819b61}] [] [{kube-controller-manager Update v1 2021-01-11 17:50:29 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"54ba47fa-213f-4684-8c80-368bd4b8c381\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-11 17:50:30 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5v26g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5v26g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5v26g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2021-01-11 17:50:29 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:29 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:29 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2021-01-11 17:50:29 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 11 17:50:31.511: INFO: Pod "webserver-deployment-795d758f88-l2gbv" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-l2gbv webserver-deployment-795d758f88- deployment-8364 83698df1-5ddd-42de-aa6d-2f9d316a0e71 209334 0 2021-01-11 17:50:28 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 54ba47fa-213f-4684-8c80-368bd4b8c381 0xb819d00 0xb819d01}] [] [{kube-controller-manager Update v1 2021-01-11 17:50:28 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"54ba47fa-213f-4684-8c80-368bd4b8c381\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-11 17:50:29 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5v26g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5v26g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5v26g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2021-01-11 17:50:29 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:29 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2021-01-11 17:50:29 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 11 17:50:31.513: INFO: Pod "webserver-deployment-795d758f88-pqkkw" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-pqkkw webserver-deployment-795d758f88- deployment-8364 e3dfea74-9288-45d2-8528-c4d5d4b944fd 209372 0 2021-01-11 17:50:28 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 54ba47fa-213f-4684-8c80-368bd4b8c381 0xb819ea0 0xb819ea1}] [] [{kube-controller-manager Update v1 2021-01-11 17:50:28 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"54ba47fa-213f-4684-8c80-368bd4b8c381\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-11 17:50:31 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5v26g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5v26g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5v26g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2021-01-11 17:50:29 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:29 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:29 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2021-01-11 17:50:29 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 11 17:50:31.514: INFO: Pod "webserver-deployment-795d758f88-qj5gq" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-qj5gq webserver-deployment-795d758f88- deployment-8364 110a3e4c-8a76-4a17-8b62-62f30fe2f48f 209370 0 2021-01-11 17:50:28 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 54ba47fa-213f-4684-8c80-368bd4b8c381 0xb7ee040 0xb7ee041}] [] [{kube-controller-manager Update v1 2021-01-11 17:50:28 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"54ba47fa-213f-4684-8c80-368bd4b8c381\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-11 17:50:31 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5v26g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5v26g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5v26g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2021-01-11 17:50:29 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:29 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:29 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2021-01-11 17:50:29 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 11 17:50:31.515: INFO: Pod "webserver-deployment-795d758f88-sldql" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-sldql webserver-deployment-795d758f88- deployment-8364 907a58a7-512a-4756-9661-33724090f197 209234 0 2021-01-11 17:50:26 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 54ba47fa-213f-4684-8c80-368bd4b8c381 0xb7ee1e0 0xb7ee1e1}] [] [{kube-controller-manager Update v1 2021-01-11 17:50:26 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"54ba47fa-213f-4684-8c80-368bd4b8c381\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-11 17:50:27 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5v26g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5v26g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5v26g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2021-01-11 17:50:27 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:27 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2021-01-11 17:50:27 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 11 17:50:31.517: INFO: Pod "webserver-deployment-795d758f88-vvcsl" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-vvcsl webserver-deployment-795d758f88- deployment-8364 a4d01d9c-41f3-4fd1-ac48-33b02db0f36a 209324 0 2021-01-11 17:50:28 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 54ba47fa-213f-4684-8c80-368bd4b8c381 0xb7ee410 0xb7ee411}] [] [{kube-controller-manager Update v1 2021-01-11 17:50:28 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"54ba47fa-213f-4684-8c80-368bd4b8c381\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-11 17:50:29 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5v26g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5v26g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5v26g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2021-01-11 17:50:29 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:29 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2021-01-11 17:50:29 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 11 17:50:31.518: INFO: Pod "webserver-deployment-795d758f88-vw7nx" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-vw7nx webserver-deployment-795d758f88- deployment-8364 3bde5f86-1522-4194-9034-5167a9e74d7f 209364 0 2021-01-11 17:50:28 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 54ba47fa-213f-4684-8c80-368bd4b8c381 0xb7ee5b0 0xb7ee5b1}] [] [{kube-controller-manager Update v1 2021-01-11 17:50:28 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"54ba47fa-213f-4684-8c80-368bd4b8c381\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-11 17:50:30 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5v26g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5v26g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5v26g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2021-01-11 17:50:29 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:29 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:29 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2021-01-11 17:50:29 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 11 17:50:31.519: INFO: Pod "webserver-deployment-795d758f88-w7pxp" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-w7pxp webserver-deployment-795d758f88- deployment-8364 84b440af-04ec-403f-901d-b238b7b8f1be 209232 0 2021-01-11 17:50:26 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 54ba47fa-213f-4684-8c80-368bd4b8c381 0xb7ee750 0xb7ee751}] [] [{kube-controller-manager Update v1 2021-01-11 17:50:26 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"54ba47fa-213f-4684-8c80-368bd4b8c381\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-11 17:50:27 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5v26g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5v26g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5v26g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2021-01-11 17:50:26 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:26 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2021-01-11 17:50:26 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 11 17:50:31.521: INFO: Pod "webserver-deployment-795d758f88-x4jrp" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-x4jrp webserver-deployment-795d758f88- deployment-8364 5ae134b6-9107-41e5-9336-7265dac5ce68 209221 0 2021-01-11 17:50:26 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 54ba47fa-213f-4684-8c80-368bd4b8c381 0xb7ee910 0xb7ee911}] [] [{kube-controller-manager Update v1 2021-01-11 17:50:26 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"54ba47fa-213f-4684-8c80-368bd4b8c381\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-11 17:50:26 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5v26g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5v26g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5v26g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2021-01-11 17:50:26 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:26 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2021-01-11 17:50:26 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 11 17:50:31.522: INFO: Pod "webserver-deployment-dd94f59b7-58gqm" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-58gqm webserver-deployment-dd94f59b7- deployment-8364 bc2bccca-d4c7-45e3-ac79-76c9f0700669 209175 0 2021-01-11 17:50:11 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 edb9c823-5d96-4180-a9cf-07bcd32f372e 0xb7eeac0 0xb7eeac1}] [] [{kube-controller-manager Update v1 2021-01-11 17:50:11 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"edb9c823-5d96-4180-a9cf-07bcd32f372e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-11 17:50:25 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.83\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5v26g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5v26g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5v26g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:11 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.2.83,StartTime:2021-01-11 17:50:11 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-01-11 17:50:24 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://c1074f81bce2e87a99695b6444b37388e020072ed556762627a662aec456a7ae,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.83,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 11 17:50:31.524: INFO: Pod "webserver-deployment-dd94f59b7-6zf5v" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-6zf5v webserver-deployment-dd94f59b7- deployment-8364 d8c296dc-b134-4768-8569-dc5272ceb344 209296 0 2021-01-11 17:50:28 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 edb9c823-5d96-4180-a9cf-07bcd32f372e 0xb7eec67 0xb7eec68}] [] [{kube-controller-manager Update v1 2021-01-11 17:50:28 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"edb9c823-5d96-4180-a9cf-07bcd32f372e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-11 17:50:29 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5v26g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5v26g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5v26g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:28 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:28 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:28 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2021-01-11 17:50:28 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 11 17:50:31.525: INFO: Pod "webserver-deployment-dd94f59b7-7hlt8" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-7hlt8 webserver-deployment-dd94f59b7- deployment-8364 5f1bdfa7-dd5b-4a59-a4b6-0840caaa370f 209306 0 2021-01-11 17:50:28 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 edb9c823-5d96-4180-a9cf-07bcd32f372e 0xb7eedf7 0xb7eedf8}] [] [{kube-controller-manager Update v1 2021-01-11 17:50:28 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"edb9c823-5d96-4180-a9cf-07bcd32f372e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-11 17:50:29 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5v26g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5v26g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5v26g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:28 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:28 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:28 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2021-01-11 17:50:28 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 11 17:50:31.526: INFO: Pod "webserver-deployment-dd94f59b7-8ssn5" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-8ssn5 webserver-deployment-dd94f59b7- deployment-8364 e808aa7c-2338-47bb-89ba-0eae5efeed00 209172 0 2021-01-11 17:50:11 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 edb9c823-5d96-4180-a9cf-07bcd32f372e 0xb7eef87 0xb7eef88}] [] [{kube-controller-manager Update v1 2021-01-11 17:50:11 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"edb9c823-5d96-4180-a9cf-07bcd32f372e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-11 17:50:25 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.82\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5v26g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5v26g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5v26g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:11 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.2.82,StartTime:2021-01-11 17:50:11 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-01-11 17:50:24 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://427b68c88b3ae316b752be08bfcd5703ffd0464a2016ae950ba6750ea55610d1,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.82,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 11 17:50:31.528: INFO: Pod "webserver-deployment-dd94f59b7-b6xr8" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-b6xr8 webserver-deployment-dd94f59b7- deployment-8364 d4d68ff4-9f8e-48c7-9d70-d26b446b79f6 209169 0 2021-01-11 17:50:11 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 edb9c823-5d96-4180-a9cf-07bcd32f372e 0xb7ef137 0xb7ef138}] [] [{kube-controller-manager Update v1 2021-01-11 17:50:11 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"edb9c823-5d96-4180-a9cf-07bcd32f372e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-11 17:50:25 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.84\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5v26g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5v26g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5v26g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:11 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.2.84,StartTime:2021-01-11 17:50:11 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-01-11 17:50:24 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://393b9da82f328525c69471b1fb4941622c4c87a3798f434db55f79a9d95ef56d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.84,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 11 17:50:31.529: INFO: Pod "webserver-deployment-dd94f59b7-bb56h" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-bb56h webserver-deployment-dd94f59b7- deployment-8364 029ba9e4-3c50-4b6f-b6ae-e079052e8650 209317 0 2021-01-11 17:50:28 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 edb9c823-5d96-4180-a9cf-07bcd32f372e 0xb7ef2e7 0xb7ef2e8}] [] [{kube-controller-manager Update v1 2021-01-11 17:50:28 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"edb9c823-5d96-4180-a9cf-07bcd32f372e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-11 17:50:29 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5v26g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5v26g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5v26g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:28 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:28 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:28 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2021-01-11 17:50:28 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 11 17:50:31.531: INFO: Pod "webserver-deployment-dd94f59b7-ck4n4" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-ck4n4 webserver-deployment-dd94f59b7- deployment-8364 f40f3f8d-5a8b-4d0d-9b73-38a4767a5202 209281 0 2021-01-11 17:50:28 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 edb9c823-5d96-4180-a9cf-07bcd32f372e 0xb7ef477 0xb7ef478}] [] [{kube-controller-manager Update v1 2021-01-11 17:50:28 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"edb9c823-5d96-4180-a9cf-07bcd32f372e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-11 17:50:28 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5v26g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5v26g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5v26g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:28 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:28 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:28 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2021-01-11 17:50:28 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 11 17:50:31.533: INFO: Pod "webserver-deployment-dd94f59b7-ds92f" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-ds92f webserver-deployment-dd94f59b7- deployment-8364 ee5f19c3-ff4b-4588-8747-cea52024d52c 209330 0 2021-01-11 17:50:28 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 edb9c823-5d96-4180-a9cf-07bcd32f372e 0xb7ef607 0xb7ef608}] [] [{kube-controller-manager Update v1 2021-01-11 17:50:28 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"edb9c823-5d96-4180-a9cf-07bcd32f372e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-11 17:50:29 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5v26g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5v26g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5v26g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:29 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:29 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:29 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2021-01-11 17:50:29 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 11 17:50:31.534: INFO: Pod "webserver-deployment-dd94f59b7-h8hlf" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-h8hlf webserver-deployment-dd94f59b7- deployment-8364 c7c7a5db-b478-434e-b667-aaa6c1cc1218 209328 0 2021-01-11 17:50:28 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 edb9c823-5d96-4180-a9cf-07bcd32f372e 0xb7ef797 0xb7ef798}] [] [{kube-controller-manager Update v1 2021-01-11 17:50:28 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"edb9c823-5d96-4180-a9cf-07bcd32f372e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-11 17:50:29 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5v26g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5v26g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5v26g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:28 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:28 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:28 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2021-01-11 17:50:28 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 11 17:50:31.536: INFO: Pod "webserver-deployment-dd94f59b7-h8r7g" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-h8r7g webserver-deployment-dd94f59b7- deployment-8364 34cd3a63-100b-45a8-bcbf-05453c5069ec 209128 0 2021-01-11 17:50:11 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 edb9c823-5d96-4180-a9cf-07bcd32f372e 0xb7ef937 0xb7ef938}] [] [{kube-controller-manager Update v1 2021-01-11 17:50:11 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"edb9c823-5d96-4180-a9cf-07bcd32f372e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-11 17:50:21 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.92\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5v26g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5v26g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5v26g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:11 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:10.244.1.92,StartTime:2021-01-11 17:50:11 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-01-11 17:50:19 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://98224081d3884430a4f6502f267c862684efac243cbf6660785c051c2c6b2404,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.92,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 11 17:50:31.537: INFO: Pod "webserver-deployment-dd94f59b7-hhf6l" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-hhf6l webserver-deployment-dd94f59b7- deployment-8364 884cdb3d-e6da-49d0-b613-0b7ffdd315a7 209136 0 2021-01-11 17:50:11 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 edb9c823-5d96-4180-a9cf-07bcd32f372e 0xb7efae7 0xb7efae8}] [] [{kube-controller-manager Update v1 2021-01-11 17:50:11 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"edb9c823-5d96-4180-a9cf-07bcd32f372e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-11 17:50:22 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.93\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5v26g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5v26g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5v26g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:11 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:10.244.1.93,StartTime:2021-01-11 17:50:11 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-01-11 17:50:21 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://d4dfe68f02fc8e7d3c2387b74a848ba1d208c39b8eff741f2c1411610db0b22f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.93,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 11 17:50:31.538: INFO: Pod "webserver-deployment-dd94f59b7-jn4sx" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-jn4sx webserver-deployment-dd94f59b7- deployment-8364 6cdaf3cd-abaa-4bab-aaf5-837f877db696 209357 0 2021-01-11 17:50:28 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 edb9c823-5d96-4180-a9cf-07bcd32f372e 0xb7efc97 0xb7efc98}] [] [{kube-controller-manager Update v1 2021-01-11 17:50:28 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"edb9c823-5d96-4180-a9cf-07bcd32f372e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-11 17:50:30 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5v26g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5v26g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5v26g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:29 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:29 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:29 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2021-01-11 17:50:29 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 11 17:50:31.540: INFO: Pod "webserver-deployment-dd94f59b7-jthjq" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-jthjq webserver-deployment-dd94f59b7- deployment-8364 b7ce12f2-e2d3-4cf8-b8d8-8ab0c02eacc4 209344 0 2021-01-11 17:50:28 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 edb9c823-5d96-4180-a9cf-07bcd32f372e 0xb7efe57 0xb7efe58}] [] [{kube-controller-manager Update v1 2021-01-11 17:50:28 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"edb9c823-5d96-4180-a9cf-07bcd32f372e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-11 17:50:29 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5v26g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5v26g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5v26g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:29 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:29 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:29 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2021-01-11 17:50:29 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 11 17:50:31.541: INFO: Pod "webserver-deployment-dd94f59b7-lg657" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-lg657 webserver-deployment-dd94f59b7- deployment-8364 b35ba3c0-dc2f-417d-a649-3c5e4abd2efe 209346 0 2021-01-11 17:50:28 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 edb9c823-5d96-4180-a9cf-07bcd32f372e 0xb7effe7 0xb7effe8}] [] [{kube-controller-manager Update v1 2021-01-11 17:50:28 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"edb9c823-5d96-4180-a9cf-07bcd32f372e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-11 17:50:29 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5v26g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5v26g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5v26g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:29 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:29 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:29 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2021-01-11 17:50:29 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 11 17:50:31.542: INFO: Pod "webserver-deployment-dd94f59b7-ljb7g" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-ljb7g webserver-deployment-dd94f59b7- deployment-8364 a08960ee-324e-44d1-9a4f-ebd7bfc8fb5a 209323 0 2021-01-11 17:50:28 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 edb9c823-5d96-4180-a9cf-07bcd32f372e 0x9caa187 0x9caa188}] [] [{kube-controller-manager Update v1 2021-01-11 17:50:28 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"edb9c823-5d96-4180-a9cf-07bcd32f372e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-11 17:50:29 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5v26g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5v26g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5v26g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:28 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:28 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:28 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2021-01-11 17:50:28 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 11 17:50:31.544: INFO: Pod "webserver-deployment-dd94f59b7-pqplp" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-pqplp webserver-deployment-dd94f59b7- deployment-8364 8a7f28f6-fdef-4480-a74a-22a527035f0e 209336 0 2021-01-11 17:50:28 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 edb9c823-5d96-4180-a9cf-07bcd32f372e 0x9caa317 0x9caa318}] [] [{kube-controller-manager Update v1 2021-01-11 17:50:28 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"edb9c823-5d96-4180-a9cf-07bcd32f372e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-11 17:50:29 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5v26g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5v26g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5v26g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:29 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:29 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:29 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2021-01-11 17:50:29 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 11 17:50:31.545: INFO: Pod "webserver-deployment-dd94f59b7-sd6fm" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-sd6fm webserver-deployment-dd94f59b7- deployment-8364 499feeaf-fca1-4e01-8498-10c1be435cdf 209123 0 2021-01-11 17:50:11 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 edb9c823-5d96-4180-a9cf-07bcd32f372e 0x9caa4a7 0x9caa4a8}] [] [{kube-controller-manager Update v1 2021-01-11 17:50:11 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"edb9c823-5d96-4180-a9cf-07bcd32f372e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-11 17:50:21 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.80\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5v26g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5v26g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5v26g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:11 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.2.80,StartTime:2021-01-11 17:50:11 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-01-11 17:50:19 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://5c6acc9a08868a1e4d211bc8ed5a01677df17451538409effe65f685e106cfde,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.80,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 11 17:50:31.546: INFO: Pod "webserver-deployment-dd94f59b7-szzdq" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-szzdq webserver-deployment-dd94f59b7- deployment-8364 a98f8dcb-01d4-4ae0-a26f-01f6e5a75f16 209153 0 2021-01-11 17:50:11 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 edb9c823-5d96-4180-a9cf-07bcd32f372e 0x9caa667 0x9caa668}] [] [{kube-controller-manager Update v1 2021-01-11 17:50:11 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"edb9c823-5d96-4180-a9cf-07bcd32f372e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-11 17:50:24 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.81\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5v26g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5v26g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5v26g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:11 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.2.81,StartTime:2021-01-11 17:50:11 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-01-11 17:50:23 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://8a155d0ea308b5656353c870da52bbe74f5167a093ad6ab12d5b9ce330182b18,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.81,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 11 17:50:31.548: INFO: Pod "webserver-deployment-dd94f59b7-tvhb4" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-tvhb4 webserver-deployment-dd94f59b7- deployment-8364 a2f310b8-5d7b-43fb-b843-1761622208c4 209121 0 2021-01-11 17:50:11 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 edb9c823-5d96-4180-a9cf-07bcd32f372e 0x9caa817 0x9caa818}] [] [{kube-controller-manager Update v1 2021-01-11 17:50:11 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"edb9c823-5d96-4180-a9cf-07bcd32f372e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-11 17:50:21 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.91\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5v26g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5v26g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5v26g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:11 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:10.244.1.91,StartTime:2021-01-11 17:50:11 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-01-11 17:50:19 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://0a34f98b34b664642f30b05dc90e933428a5cc79189c27d910e3db43da85ef8c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.91,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 11 17:50:31.549: INFO: Pod "webserver-deployment-dd94f59b7-xs2m4" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-xs2m4 webserver-deployment-dd94f59b7- deployment-8364 8a367e46-8b5d-47c1-99f4-2beb7ddc2d07 209316 0 2021-01-11 17:50:28 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 edb9c823-5d96-4180-a9cf-07bcd32f372e 0x9caa9c7 0x9caa9c8}] [] [{kube-controller-manager Update v1 2021-01-11 17:50:28 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"edb9c823-5d96-4180-a9cf-07bcd32f372e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-11 17:50:29 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5v26g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5v26g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5v26g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:28 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:28 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:28 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 17:50:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2021-01-11 17:50:28 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 17:50:31.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-8364" for this suite. • [SLOW TEST:20.349 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":309,"completed":200,"skipped":3548,"failed":0} [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 17:50:31.914: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating pod liveness-ba7276d9-8442-4d93-8bad-658a25644197 in namespace container-probe-5419 Jan 11 17:51:00.207: INFO: Started pod liveness-ba7276d9-8442-4d93-8bad-658a25644197 in namespace container-probe-5419 STEP: checking the pod's current state and verifying that restartCount is present Jan 11 17:51:00.221: INFO: Initial restart count of pod liveness-ba7276d9-8442-4d93-8bad-658a25644197 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 17:55:01.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5419" for this suite. • [SLOW TEST:269.488 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":309,"completed":201,"skipped":3548,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 17:55:01.404: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating configMap with name configmap-test-volume-eefd13db-023b-4708-9b75-7b421078556c STEP: Creating a pod to test consume configMaps Jan 11 17:55:01.832: INFO: Waiting up to 5m0s for pod "pod-configmaps-77c71426-373a-41d6-a279-a374cba26708" in namespace "configmap-813" to be "Succeeded or Failed" Jan 11 17:55:01.839: INFO: Pod "pod-configmaps-77c71426-373a-41d6-a279-a374cba26708": Phase="Pending", Reason="", readiness=false. Elapsed: 7.552612ms Jan 11 17:55:04.608: INFO: Pod "pod-configmaps-77c71426-373a-41d6-a279-a374cba26708": Phase="Pending", Reason="", readiness=false. Elapsed: 2.776268354s Jan 11 17:55:06.614: INFO: Pod "pod-configmaps-77c71426-373a-41d6-a279-a374cba26708": Phase="Pending", Reason="", readiness=false. Elapsed: 4.782449042s Jan 11 17:55:08.624: INFO: Pod "pod-configmaps-77c71426-373a-41d6-a279-a374cba26708": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.792332375s STEP: Saw pod success Jan 11 17:55:08.625: INFO: Pod "pod-configmaps-77c71426-373a-41d6-a279-a374cba26708" satisfied condition "Succeeded or Failed" Jan 11 17:55:08.634: INFO: Trying to get logs from node leguer-worker pod pod-configmaps-77c71426-373a-41d6-a279-a374cba26708 container agnhost-container: STEP: delete the pod Jan 11 17:55:08.734: INFO: Waiting for pod pod-configmaps-77c71426-373a-41d6-a279-a374cba26708 to disappear Jan 11 17:55:08.864: INFO: Pod pod-configmaps-77c71426-373a-41d6-a279-a374cba26708 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 17:55:08.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-813" for this suite. 
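For readers unfamiliar with the fixture behind the ConfigMap defaultMode test above: the pattern is a ConfigMap mounted as a pod volume whose projected files all receive one fixed mode. The following is only an illustrative sketch built from the k8s.io/api Go types seen in the dumps earlier; the object names are hypothetical and busybox is substituted for the test's agnhost image, so this is not the suite's actual fixture code.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// File mode 0400 (octal) applied to every key projected from the ConfigMap.
	mode := int32(0400)

	// Hypothetical ConfigMap; the e2e test generates a random name.
	cm := corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-volume"},
		Data:       map[string]string{"data-1": "value-1"},
	}

	// Pod that mounts the ConfigMap as a volume with DefaultMode set and
	// simply stats the projected file; the real test uses an agnhost image.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "configmap-volume-test",
				Image:   "busybox", // image choice is an assumption for the sketch
				Command: []string{"sh", "-c", "stat -c '%a' /etc/configmap-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "configmap-volume",
					MountPath: "/etc/configmap-volume",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: cm.Name},
						DefaultMode:          &mode, // every projected file gets mode 0400
					},
				},
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}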
• [SLOW TEST:7.475 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":202,"skipped":3566,"failed":0} SSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 17:55:08.881: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating configMap with name projected-configmap-test-volume-map-e2f5cba6-6e43-4e2f-aa3c-fc357f2e0cd4 STEP: Creating a pod to test consume configMaps Jan 11 17:55:09.026: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3f89fae3-9235-47c0-bfec-3a6345d58de8" in namespace "projected-3916" to be "Succeeded or Failed" Jan 11 17:55:09.038: INFO: Pod "pod-projected-configmaps-3f89fae3-9235-47c0-bfec-3a6345d58de8": Phase="Pending", Reason="", readiness=false. Elapsed: 11.423506ms Jan 11 17:55:11.047: INFO: Pod "pod-projected-configmaps-3f89fae3-9235-47c0-bfec-3a6345d58de8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020384125s Jan 11 17:55:13.055: INFO: Pod "pod-projected-configmaps-3f89fae3-9235-47c0-bfec-3a6345d58de8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028212577s STEP: Saw pod success Jan 11 17:55:13.055: INFO: Pod "pod-projected-configmaps-3f89fae3-9235-47c0-bfec-3a6345d58de8" satisfied condition "Succeeded or Failed" Jan 11 17:55:13.061: INFO: Trying to get logs from node leguer-worker2 pod pod-projected-configmaps-3f89fae3-9235-47c0-bfec-3a6345d58de8 container agnhost-container: STEP: delete the pod Jan 11 17:55:13.116: INFO: Waiting for pod pod-projected-configmaps-3f89fae3-9235-47c0-bfec-3a6345d58de8 to disappear Jan 11 17:55:13.127: INFO: Pod pod-projected-configmaps-3f89fae3-9235-47c0-bfec-3a6345d58de8 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 17:55:13.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3916" for this suite. 
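The projected ConfigMap test above exercises a per-item mapping: a single ConfigMap key is projected to a chosen relative path with its own file mode. A minimal sketch of just that volume definition follows, with hypothetical names; it is not the suite's fixture code.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	itemMode := int32(0400) // per-item file mode for the mapped key

	// Projected volume that maps ConfigMap key "data-1" to the relative
	// path "path/to/data-2" with an explicit mode. Names are invented;
	// the test uses a randomly generated ConfigMap name.
	vol := corev1.Volume{
		Name: "projected-configmap-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-volume-map"},
						Items: []corev1.KeyToPath{{
							Key:  "data-1",
							Path: "path/to/data-2",
							Mode: &itemMode,
						}},
					},
				}},
			},
		},
	}

	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}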
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":203,"skipped":3570,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 17:55:13.143: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating projection with secret that has name projected-secret-test-map-70a98154-5d6c-4410-b77b-47dd395f8284 STEP: Creating a pod to test consume secrets Jan 11 17:55:13.272: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-49271cce-2445-4200-9b0c-56fa0ab177bd" in namespace "projected-9042" to be "Succeeded or Failed" Jan 11 17:55:13.285: INFO: Pod "pod-projected-secrets-49271cce-2445-4200-9b0c-56fa0ab177bd": Phase="Pending", Reason="", readiness=false. Elapsed: 11.990437ms Jan 11 17:55:15.363: INFO: Pod "pod-projected-secrets-49271cce-2445-4200-9b0c-56fa0ab177bd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.090115945s Jan 11 17:55:17.370: INFO: Pod "pod-projected-secrets-49271cce-2445-4200-9b0c-56fa0ab177bd": Phase="Running", Reason="", readiness=true. Elapsed: 4.097535849s Jan 11 17:55:19.378: INFO: Pod "pod-projected-secrets-49271cce-2445-4200-9b0c-56fa0ab177bd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.105987279s STEP: Saw pod success Jan 11 17:55:19.379: INFO: Pod "pod-projected-secrets-49271cce-2445-4200-9b0c-56fa0ab177bd" satisfied condition "Succeeded or Failed" Jan 11 17:55:19.385: INFO: Trying to get logs from node leguer-worker2 pod pod-projected-secrets-49271cce-2445-4200-9b0c-56fa0ab177bd container projected-secret-volume-test: STEP: delete the pod Jan 11 17:55:19.434: INFO: Waiting for pod pod-projected-secrets-49271cce-2445-4200-9b0c-56fa0ab177bd to disappear Jan 11 17:55:19.463: INFO: Pod pod-projected-secrets-49271cce-2445-4200-9b0c-56fa0ab177bd no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 17:55:19.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9042" for this suite. 
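Several tests in this run, including the projected secret test above, wait for a pod to reach "Succeeded or Failed" by polling its phase, which is where the repeated Pending, Running, Succeeded lines come from. A rough client-go sketch of such a wait loop is below, assuming a kubernetes.Interface client; the framework's own helper may be implemented differently.

// Package sketch is an illustration only, not the e2e framework's helper.
package sketch

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodSuccessOrFailure polls the pod's phase every 2s until it leaves
// Pending/Running, mirroring the progression printed in the log above. It
// returns an error immediately if the pod ends up in the Failed phase.
func waitForPodSuccessOrFailure(ctx context.Context, cs kubernetes.Interface, namespace, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(namespace).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		switch pod.Status.Phase {
		case corev1.PodSucceeded:
			return true, nil
		case corev1.PodFailed:
			return false, fmt.Errorf("pod %s/%s failed: %s", namespace, name, pod.Status.Reason)
		default:
			return false, nil // still Pending or Running, keep polling
		}
	})
}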
• [SLOW TEST:6.337 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":204,"skipped":3582,"failed":0} [sig-apps] Job should delete a job [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 17:55:19.481: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-3477, will wait for the garbage collector to delete the pods Jan 11 17:55:27.609: INFO: Deleting Job.batch foo took: 10.1117ms Jan 11 17:55:28.210: INFO: Terminating Job.batch foo pods took: 600.875004ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 17:56:30.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-3477" for this suite. 
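The Job deletion test above removes the Job object and then, as the log says, waits for the garbage collector to clean up the pods via their owner references. One way to express such a delete with client-go is sketched below; the background propagation policy is an assumption about the mechanism for the sake of the example, not a statement of which call the framework makes.

package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deleteJobLeavePodsToGC deletes a Job so that the Job object goes away
// immediately while its pods are cleaned up asynchronously by the garbage
// collector through their ownerReferences.
func deleteJobLeavePodsToGC(ctx context.Context, cs kubernetes.Interface, namespace, name string) error {
	policy := metav1.DeletePropagationBackground
	return cs.BatchV1().Jobs(namespace).Delete(ctx, name, metav1.DeleteOptions{
		PropagationPolicy: &policy,
	})
}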
• [SLOW TEST:70.756 seconds] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":309,"completed":205,"skipped":3582,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 17:56:30.238: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:187 [It] should contain environment variables for services [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 11 17:56:34.508: INFO: Waiting up to 5m0s for pod "client-envvars-b7df46d6-3b1e-4cc0-b312-73ede808b3d3" in namespace "pods-7432" to be "Succeeded or Failed" Jan 11 17:56:34.526: INFO: Pod "client-envvars-b7df46d6-3b1e-4cc0-b312-73ede808b3d3": Phase="Pending", Reason="", readiness=false. Elapsed: 18.135098ms Jan 11 17:56:36.675: INFO: Pod "client-envvars-b7df46d6-3b1e-4cc0-b312-73ede808b3d3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.167428675s Jan 11 17:56:38.683: INFO: Pod "client-envvars-b7df46d6-3b1e-4cc0-b312-73ede808b3d3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.174797682s STEP: Saw pod success Jan 11 17:56:38.683: INFO: Pod "client-envvars-b7df46d6-3b1e-4cc0-b312-73ede808b3d3" satisfied condition "Succeeded or Failed" Jan 11 17:56:38.687: INFO: Trying to get logs from node leguer-worker pod client-envvars-b7df46d6-3b1e-4cc0-b312-73ede808b3d3 container env3cont: STEP: delete the pod Jan 11 17:56:38.731: INFO: Waiting for pod client-envvars-b7df46d6-3b1e-4cc0-b312-73ede808b3d3 to disappear Jan 11 17:56:38.737: INFO: Pod client-envvars-b7df46d6-3b1e-4cc0-b312-73ede808b3d3 no longer exists [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 17:56:38.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7432" for this suite. 
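The pods environment-variable test above starts a server pod and a service first, then a client pod, and inspects the client container's output for the injected <SERVICE>_SERVICE_HOST / <SERVICE>_SERVICE_PORT variables (note EnableServiceLinks:*true in the pod specs dumped earlier). As a stand-alone illustration, a tiny program like the following could be run inside a test container to print those variables; it is not the env3cont container's actual command.

package main

import (
	"fmt"
	"os"
	"strings"
)

// Prints every injected service-discovery variable of the form
// FOO_SERVICE_HOST=10.96.x.y or FOO_SERVICE_PORT=80 so that a test can
// assert on them from the pod logs.
func main() {
	for _, kv := range os.Environ() {
		name := strings.SplitN(kv, "=", 2)[0]
		if strings.HasSuffix(name, "_SERVICE_HOST") || strings.HasSuffix(name, "_SERVICE_PORT") {
			fmt.Println(kv)
		}
	}
}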
• [SLOW TEST:8.555 seconds] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should contain environment variables for services [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":309,"completed":206,"skipped":3600,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 17:56:38.796: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 11 17:56:38.888: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 17:56:39.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-8923" for this suite. 
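The custom resource definition test above depends on a CRD with the status subresource enabled, so that /status can be fetched, updated, and patched independently of the main resource. A hypothetical minimal CRD object with that subresource, using the apiextensions v1 Go types, is sketched below; the group, names, and schema are invented for illustration and are not the test's generated fixture.

package main

import (
	"encoding/json"
	"fmt"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Enabling Subresources.Status is what creates the /status endpoint
	// that the test gets, updates, and patches.
	crd := apiextensionsv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "foos.mygroup.example.com"},
		Spec: apiextensionsv1.CustomResourceDefinitionSpec{
			Group: "mygroup.example.com",
			Scope: apiextensionsv1.NamespaceScoped,
			Names: apiextensionsv1.CustomResourceDefinitionNames{
				Plural:   "foos",
				Singular: "foo",
				Kind:     "Foo",
				ListKind: "FooList",
			},
			Versions: []apiextensionsv1.CustomResourceDefinitionVersion{{
				Name:    "v1",
				Served:  true,
				Storage: true,
				Schema: &apiextensionsv1.CustomResourceValidation{
					OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{
						Type: "object",
						Properties: map[string]apiextensionsv1.JSONSchemaProps{
							"status": {
								Type: "object",
								Properties: map[string]apiextensionsv1.JSONSchemaProps{
									"replicas": {Type: "integer"},
								},
							},
						},
					},
				},
				// With the status subresource, writes to /status only persist .status.
				Subresources: &apiextensionsv1.CustomResourceSubresources{
					Status: &apiextensionsv1.CustomResourceSubresourceStatus{},
				},
			}},
		},
	}

	out, _ := json.MarshalIndent(crd, "", "  ")
	fmt.Println(string(out))
}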
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":309,"completed":207,"skipped":3623,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 17:56:39.559: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should serve a basic endpoint from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating service endpoint-test2 in namespace services-3886 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3886 to expose endpoints map[] Jan 11 17:56:39.724: INFO: successfully validated that service endpoint-test2 in namespace services-3886 exposes endpoints map[] STEP: Creating pod pod1 in namespace services-3886 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3886 to expose endpoints map[pod1:[80]] Jan 11 17:56:43.817: INFO: successfully validated that service endpoint-test2 in namespace services-3886 exposes endpoints map[pod1:[80]] STEP: Creating pod pod2 in namespace services-3886 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3886 to expose endpoints map[pod1:[80] pod2:[80]] Jan 11 17:56:46.904: INFO: successfully validated that service endpoint-test2 in namespace services-3886 exposes endpoints map[pod1:[80] pod2:[80]] STEP: Deleting pod pod1 in namespace services-3886 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3886 to expose endpoints map[pod2:[80]] Jan 11 17:56:47.012: INFO: successfully validated that service endpoint-test2 in namespace services-3886 exposes endpoints map[pod2:[80]] STEP: Deleting pod pod2 in namespace services-3886 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3886 to expose endpoints map[] Jan 11 17:56:47.371: INFO: successfully validated that service endpoint-test2 in namespace services-3886 exposes endpoints map[] [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 17:56:47.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3886" for this suite. 
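The Services endpoint test above asserts that the service's Endpoints object tracks its backing pods as they are created and deleted (the "to expose endpoints map[...]" steps). A sketch of the two objects involved is below, with a hypothetical selector label and image; once such a pod is Running and Ready, the endpoints controller adds its IP and port to the service's Endpoints, which is what the test observes.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	selector := map[string]string{"app": "endpoint-test"} // hypothetical label

	// Service that selects the backing pods and forwards port 80.
	svc := corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "endpoint-test2"},
		Spec: corev1.ServiceSpec{
			Selector: selector,
			Ports: []corev1.ServicePort{{
				Port:       80,
				TargetPort: intstr.FromInt(80),
			}},
		},
	}

	// A backing pod carrying the selector label and serving on port 80.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod1", Labels: selector},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "server",
				Image: "k8s.gcr.io/pause:3.2", // image is an assumption for the sketch
				Ports: []corev1.ContainerPort{{ContainerPort: 80}},
			}},
		},
	}

	for _, obj := range []interface{}{svc, pod} {
		out, _ := json.MarshalIndent(obj, "", "  ")
		fmt.Println(string(out))
	}
}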
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 • [SLOW TEST:8.135 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":309,"completed":208,"skipped":3651,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 17:56:47.696: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 11 17:57:00.294: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 11 17:57:02.419: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745984620, loc:(*time.Location)(0x5f133f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745984620, loc:(*time.Location)(0x5f133f0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745984620, loc:(*time.Location)(0x5f133f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745984620, loc:(*time.Location)(0x5f133f0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 11 17:57:04.429: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745984620, loc:(*time.Location)(0x5f133f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745984620, loc:(*time.Location)(0x5f133f0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745984620, loc:(*time.Location)(0x5f133f0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745984620, loc:(*time.Location)(0x5f133f0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 11 17:57:07.460: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 17:57:07.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-161" for this suite. STEP: Destroying namespace "webhook-161-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:19.963 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":309,"completed":209,"skipped":3665,"failed":0} S ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 17:57:07.659: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating the pod Jan 11 17:57:07.780: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 17:57:16.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"init-container-7118" for this suite. • [SLOW TEST:9.064 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":309,"completed":210,"skipped":3666,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 17:57:16.726: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 11 17:57:16.793: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 17:57:18.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-7423" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":309,"completed":211,"skipped":3692,"failed":0} SSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 17:57:18.048: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:52 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Jan 11 17:57:26.293: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 11 17:57:26.313: INFO: Pod pod-with-prestop-exec-hook still exists Jan 11 17:57:28.314: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 11 17:57:28.450: INFO: Pod pod-with-prestop-exec-hook still exists Jan 11 17:57:30.314: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 11 17:57:30.330: INFO: Pod pod-with-prestop-exec-hook still exists Jan 11 17:57:32.314: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 11 17:57:32.323: INFO: Pod pod-with-prestop-exec-hook still exists Jan 11 17:57:34.314: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 11 17:57:34.323: INFO: Pod pod-with-prestop-exec-hook still exists Jan 11 17:57:36.314: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 11 17:57:36.321: INFO: Pod pod-with-prestop-exec-hook still exists Jan 11 17:57:38.314: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 11 17:57:38.322: INFO: Pod pod-with-prestop-exec-hook still exists Jan 11 17:57:40.314: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 11 17:57:40.322: INFO: Pod pod-with-prestop-exec-hook still exists Jan 11 17:57:42.314: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 11 17:57:42.323: INFO: Pod pod-with-prestop-exec-hook still exists Jan 11 17:57:44.314: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 11 17:57:44.322: INFO: Pod pod-with-prestop-exec-hook still exists Jan 11 17:57:46.314: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 11 17:57:48.125: INFO: Pod pod-with-prestop-exec-hook still exists Jan 11 17:57:48.314: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 11 17:57:48.323: INFO: Pod pod-with-prestop-exec-hook still exists Jan 11 17:57:50.314: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 11 17:57:50.322: INFO: Pod pod-with-prestop-exec-hook still exists Jan 11 17:57:52.314: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 11 17:57:52.340: INFO: Pod pod-with-prestop-exec-hook still exists Jan 11 17:57:54.314: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 11 17:57:54.322: INFO: Pod pod-with-prestop-exec-hook still exists Jan 11 17:57:56.314: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 11 17:57:56.321: INFO: Pod pod-with-prestop-exec-hook still exists Jan 11 17:57:58.314: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 11 17:57:58.357: INFO: Pod pod-with-prestop-exec-hook still exists Jan 11 17:58:00.314: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 11 17:58:00.336: INFO: Pod pod-with-prestop-exec-hook still exists Jan 11 17:58:02.314: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 11 17:58:02.362: INFO: Pod pod-with-prestop-exec-hook still exists Jan 11 17:58:04.314: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 11 17:58:04.321: INFO: Pod pod-with-prestop-exec-hook still exists Jan 11 17:58:06.314: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 11 17:58:06.337: INFO: Pod pod-with-prestop-exec-hook still exists Jan 11 17:58:08.314: INFO: Waiting for pod 
pod-with-prestop-exec-hook to disappear Jan 11 17:58:08.322: INFO: Pod pod-with-prestop-exec-hook still exists Jan 11 17:58:10.314: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 11 17:58:10.324: INFO: Pod pod-with-prestop-exec-hook still exists Jan 11 17:58:12.314: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 11 17:58:12.321: INFO: Pod pod-with-prestop-exec-hook still exists Jan 11 17:58:14.315: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 11 17:58:14.386: INFO: Pod pod-with-prestop-exec-hook still exists Jan 11 17:58:16.314: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 11 17:58:16.328: INFO: Pod pod-with-prestop-exec-hook still exists Jan 11 17:58:18.314: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 11 17:58:18.321: INFO: Pod pod-with-prestop-exec-hook still exists Jan 11 17:58:20.314: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 11 17:58:20.323: INFO: Pod pod-with-prestop-exec-hook still exists Jan 11 17:58:22.314: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 11 17:58:22.324: INFO: Pod pod-with-prestop-exec-hook still exists Jan 11 17:58:24.314: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 11 17:58:24.324: INFO: Pod pod-with-prestop-exec-hook still exists Jan 11 17:58:26.314: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 11 17:58:26.323: INFO: Pod pod-with-prestop-exec-hook still exists Jan 11 17:58:28.314: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 11 17:58:28.322: INFO: Pod pod-with-prestop-exec-hook still exists Jan 11 17:58:30.314: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 11 17:58:30.322: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 17:58:30.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-3829" for this suite. 
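Note: the deletion delay observed above comes from the pod's preStop exec hook, which the kubelet runs (bounded by the termination grace period) before sending SIGTERM; a minimal sketch of such a pod, using an illustrative busybox image rather than the suite's handler/poster pods:
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: prestop-demo
  spec:
    terminationGracePeriodSeconds: 30
    containers:
    - name: main
      image: busybox:1.33
      command: ["sh", "-c", "sleep 3600"]
      lifecycle:
        preStop:
          exec:
            # Runs inside the container before SIGTERM is delivered
            command: ["sh", "-c", "echo prestop >> /tmp/hook.log; sleep 5"]
  EOF
  kubectl delete pod prestop-demo   # deletion triggers the preStop hook first, then normal termination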
• [SLOW TEST:72.312 seconds] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:43 should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":309,"completed":212,"skipped":3697,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 17:58:30.367: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating configMap with name cm-test-opt-del-a3d367c5-0eff-40e2-af39-a6b8df518a01 STEP: Creating configMap with name cm-test-opt-upd-09667d60-dd10-4eee-9f1e-91ccffd7d2eb STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-a3d367c5-0eff-40e2-af39-a6b8df518a01 STEP: Updating configmap cm-test-opt-upd-09667d60-dd10-4eee-9f1e-91ccffd7d2eb STEP: Creating configMap with name cm-test-opt-create-d522db84-ee22-4d55-8c50-058afa6b52e7 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 17:58:42.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1534" for this suite. 
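Note: the "optional" semantics exercised in the ConfigMap volume test above come from marking the configMap volume source optional, so the pod starts even if the ConfigMap does not exist yet and later creations/updates are projected into the mounted volume; a minimal sketch with illustrative names:
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: cm-optional-demo
  spec:
    containers:
    - name: main
      image: busybox:1.33
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
      - name: cfg
        mountPath: /etc/cfg
    volumes:
    - name: cfg
      configMap:
        name: cm-demo          # may not exist yet
        optional: true         # pod still starts; the volume stays empty until the ConfigMap appears
  EOF
  kubectl create configmap cm-demo --from-literal=key=value
  # After the kubelet's next sync the key shows up inside the running pod:
  kubectl exec cm-optional-demo -- cat /etc/cfg/key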
• [SLOW TEST:12.575 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":309,"completed":213,"skipped":3777,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 17:58:42.943: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 11 17:58:43.037: INFO: Creating ReplicaSet my-hostname-basic-9eabd9f6-d80e-4b52-b87f-5b25849d8640 Jan 11 17:58:43.050: INFO: Pod name my-hostname-basic-9eabd9f6-d80e-4b52-b87f-5b25849d8640: Found 0 pods out of 1 Jan 11 17:58:48.058: INFO: Pod name my-hostname-basic-9eabd9f6-d80e-4b52-b87f-5b25849d8640: Found 1 pods out of 1 Jan 11 17:58:48.058: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-9eabd9f6-d80e-4b52-b87f-5b25849d8640" is running Jan 11 17:58:48.064: INFO: Pod "my-hostname-basic-9eabd9f6-d80e-4b52-b87f-5b25849d8640-dm2g2" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-01-11 17:58:43 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-01-11 17:58:46 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-01-11 17:58:46 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-01-11 17:58:43 +0000 UTC Reason: Message:}]) Jan 11 17:58:48.067: INFO: Trying to dial the pod Jan 11 17:58:53.083: INFO: Controller my-hostname-basic-9eabd9f6-d80e-4b52-b87f-5b25849d8640: Got expected result from replica 1 [my-hostname-basic-9eabd9f6-d80e-4b52-b87f-5b25849d8640-dm2g2]: "my-hostname-basic-9eabd9f6-d80e-4b52-b87f-5b25849d8640-dm2g2", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 17:58:53.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-1781" for this suite. 
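Note: a minimal ReplicaSet equivalent to the one created in the test above; the name and image are illustrative (the suite serves the pod's own hostname from its test image), but the selector/template wiring is the same:
  kubectl apply -f - <<'EOF'
  apiVersion: apps/v1
  kind: ReplicaSet
  metadata:
    name: hostname-basic
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: hostname-basic
    template:
      metadata:
        labels:
          app: hostname-basic
      spec:
        containers:
        - name: web
          image: nginx:1.19
          ports:
          - containerPort: 80
  EOF
  kubectl get rs hostname-basic -o wide   # wait for READY 1/1, then dial the pod as the test does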
• [SLOW TEST:10.155 seconds] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":309,"completed":214,"skipped":3795,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 17:58:53.101: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 11 17:59:04.295: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 11 17:59:06.316: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745984744, loc:(*time.Location)(0x5f133f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745984744, loc:(*time.Location)(0x5f133f0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745984744, loc:(*time.Location)(0x5f133f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745984744, loc:(*time.Location)(0x5f133f0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 11 17:59:09.416: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation 
webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 17:59:09.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6544" for this suite. STEP: Destroying namespace "webhook-6544-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:16.644 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":309,"completed":215,"skipped":3824,"failed":0} SSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 17:59:09.747: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Jan 11 17:59:11.230: INFO: Pod name wrapped-volume-race-3fda58b4-6926-43d0-b407-7f559020e286: Found 0 pods out of 5 Jan 11 17:59:16.253: INFO: Pod name wrapped-volume-race-3fda58b4-6926-43d0-b407-7f559020e286: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-3fda58b4-6926-43d0-b407-7f559020e286 in namespace emptydir-wrapper-7702, will wait for the garbage collector to delete the pods Jan 11 17:59:32.428: INFO: Deleting ReplicationController wrapped-volume-race-3fda58b4-6926-43d0-b407-7f559020e286 took: 12.340674ms Jan 11 17:59:33.029: INFO: Terminating ReplicationController wrapped-volume-race-3fda58b4-6926-43d0-b407-7f559020e286 pods took: 601.089831ms STEP: Creating RC which spawns configmap-volume pods Jan 11 18:00:09.983: INFO: Pod name wrapped-volume-race-3d01659e-a631-4ef5-86ce-bd882d42a332: Found 0 pods out of 5 Jan 11 18:00:15.022: INFO: Pod name wrapped-volume-race-3d01659e-a631-4ef5-86ce-bd882d42a332: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-3d01659e-a631-4ef5-86ce-bd882d42a332 in namespace emptydir-wrapper-7702, will wait for the garbage collector to delete the pods Jan 11 18:00:32.136: INFO: Deleting ReplicationController wrapped-volume-race-3d01659e-a631-4ef5-86ce-bd882d42a332 took: 23.535599ms Jan 11 18:00:32.737: INFO: 
Terminating ReplicationController wrapped-volume-race-3d01659e-a631-4ef5-86ce-bd882d42a332 pods took: 600.82175ms STEP: Creating RC which spawns configmap-volume pods Jan 11 18:01:10.178: INFO: Pod name wrapped-volume-race-f98ccfb6-848a-4c01-afe0-69480c0a60c5: Found 0 pods out of 5 Jan 11 18:01:15.200: INFO: Pod name wrapped-volume-race-f98ccfb6-848a-4c01-afe0-69480c0a60c5: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-f98ccfb6-848a-4c01-afe0-69480c0a60c5 in namespace emptydir-wrapper-7702, will wait for the garbage collector to delete the pods Jan 11 18:01:35.361: INFO: Deleting ReplicationController wrapped-volume-race-f98ccfb6-848a-4c01-afe0-69480c0a60c5 took: 10.362094ms Jan 11 18:01:35.962: INFO: Terminating ReplicationController wrapped-volume-race-f98ccfb6-848a-4c01-afe0-69480c0a60c5 pods took: 600.833413ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 18:02:10.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-7702" for this suite. • [SLOW TEST:181.009 seconds] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":309,"completed":216,"skipped":3827,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 18:02:10.758: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [BeforeEach] Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:299 [It] should scale a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating a replication controller Jan 11 18:02:10.893: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-9336 create -f -' Jan 11 18:02:17.469: INFO: stderr: "" Jan 11 18:02:17.469: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Jan 11 18:02:17.470: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-9336 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 11 18:02:18.766: INFO: stderr: "" Jan 11 18:02:18.766: INFO: stdout: "update-demo-nautilus-79x45 update-demo-nautilus-sfht7 " Jan 11 18:02:18.766: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-9336 get pods update-demo-nautilus-79x45 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 11 18:02:19.955: INFO: stderr: "" Jan 11 18:02:19.955: INFO: stdout: "" Jan 11 18:02:19.955: INFO: update-demo-nautilus-79x45 is created but not running Jan 11 18:02:24.956: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-9336 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 11 18:02:26.173: INFO: stderr: "" Jan 11 18:02:26.174: INFO: stdout: "update-demo-nautilus-79x45 update-demo-nautilus-sfht7 " Jan 11 18:02:26.174: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-9336 get pods update-demo-nautilus-79x45 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 11 18:02:27.417: INFO: stderr: "" Jan 11 18:02:27.418: INFO: stdout: "true" Jan 11 18:02:27.418: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-9336 get pods update-demo-nautilus-79x45 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jan 11 18:02:28.634: INFO: stderr: "" Jan 11 18:02:28.634: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 11 18:02:28.634: INFO: validating pod update-demo-nautilus-79x45 Jan 11 18:02:28.641: INFO: got data: { "image": "nautilus.jpg" } Jan 11 18:02:28.641: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 11 18:02:28.641: INFO: update-demo-nautilus-79x45 is verified up and running Jan 11 18:02:28.642: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-9336 get pods update-demo-nautilus-sfht7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 11 18:02:29.759: INFO: stderr: "" Jan 11 18:02:29.760: INFO: stdout: "true" Jan 11 18:02:29.760: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-9336 get pods update-demo-nautilus-sfht7 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jan 11 18:02:30.897: INFO: stderr: "" Jan 11 18:02:30.897: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 11 18:02:30.897: INFO: validating pod update-demo-nautilus-sfht7 Jan 11 18:02:30.904: INFO: got data: { "image": "nautilus.jpg" } Jan 11 18:02:30.904: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 11 18:02:30.905: INFO: update-demo-nautilus-sfht7 is verified up and running STEP: scaling down the replication controller Jan 11 18:02:30.922: INFO: scanned /root for discovery docs: Jan 11 18:02:30.922: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-9336 scale rc update-demo-nautilus --replicas=1 --timeout=5m' Jan 11 18:02:32.207: INFO: stderr: "" Jan 11 18:02:32.207: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Jan 11 18:02:32.208: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-9336 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 11 18:02:33.472: INFO: stderr: "" Jan 11 18:02:33.472: INFO: stdout: "update-demo-nautilus-79x45 update-demo-nautilus-sfht7 " STEP: Replicas for name=update-demo: expected=1 actual=2 Jan 11 18:02:38.474: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-9336 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 11 18:02:39.659: INFO: stderr: "" Jan 11 18:02:39.659: INFO: stdout: "update-demo-nautilus-79x45 update-demo-nautilus-sfht7 " STEP: Replicas for name=update-demo: expected=1 actual=2 Jan 11 18:02:44.660: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-9336 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 11 18:02:45.989: INFO: stderr: "" Jan 11 18:02:45.989: INFO: stdout: "update-demo-nautilus-79x45 update-demo-nautilus-sfht7 " STEP: Replicas for name=update-demo: expected=1 actual=2 Jan 11 18:02:50.990: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-9336 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 11 18:02:52.373: INFO: stderr: "" Jan 11 18:02:52.373: INFO: stdout: "update-demo-nautilus-79x45 update-demo-nautilus-sfht7 " STEP: Replicas for name=update-demo: expected=1 actual=2 Jan 11 18:02:57.375: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-9336 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 11 18:02:58.664: INFO: stderr: "" Jan 11 18:02:58.664: INFO: stdout: "update-demo-nautilus-79x45 update-demo-nautilus-sfht7 " STEP: Replicas for name=update-demo: expected=1 actual=2 Jan 11 18:03:03.665: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-9336 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 11 18:03:04.836: INFO: stderr: "" Jan 11 18:03:04.836: INFO: stdout: 
"update-demo-nautilus-79x45 update-demo-nautilus-sfht7 " STEP: Replicas for name=update-demo: expected=1 actual=2 Jan 11 18:03:09.838: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-9336 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 11 18:03:10.983: INFO: stderr: "" Jan 11 18:03:10.983: INFO: stdout: "update-demo-nautilus-79x45 " Jan 11 18:03:10.983: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-9336 get pods update-demo-nautilus-79x45 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 11 18:03:12.177: INFO: stderr: "" Jan 11 18:03:12.177: INFO: stdout: "true" Jan 11 18:03:12.177: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-9336 get pods update-demo-nautilus-79x45 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jan 11 18:03:13.398: INFO: stderr: "" Jan 11 18:03:13.398: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 11 18:03:13.398: INFO: validating pod update-demo-nautilus-79x45 Jan 11 18:03:13.404: INFO: got data: { "image": "nautilus.jpg" } Jan 11 18:03:13.404: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 11 18:03:13.404: INFO: update-demo-nautilus-79x45 is verified up and running STEP: scaling up the replication controller Jan 11 18:03:13.416: INFO: scanned /root for discovery docs: Jan 11 18:03:13.416: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-9336 scale rc update-demo-nautilus --replicas=2 --timeout=5m' Jan 11 18:03:14.761: INFO: stderr: "" Jan 11 18:03:14.761: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Jan 11 18:03:14.761: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-9336 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 11 18:03:15.949: INFO: stderr: "" Jan 11 18:03:15.949: INFO: stdout: "update-demo-nautilus-6wp8q update-demo-nautilus-79x45 " Jan 11 18:03:15.949: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-9336 get pods update-demo-nautilus-6wp8q -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' Jan 11 18:03:17.125: INFO: stderr: "" Jan 11 18:03:17.125: INFO: stdout: "" Jan 11 18:03:17.125: INFO: update-demo-nautilus-6wp8q is created but not running Jan 11 18:03:22.127: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-9336 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 11 18:03:23.446: INFO: stderr: "" Jan 11 18:03:23.446: INFO: stdout: "update-demo-nautilus-6wp8q update-demo-nautilus-79x45 " Jan 11 18:03:23.446: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-9336 get pods update-demo-nautilus-6wp8q -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 11 18:03:24.627: INFO: stderr: "" Jan 11 18:03:24.628: INFO: stdout: "true" Jan 11 18:03:24.628: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-9336 get pods update-demo-nautilus-6wp8q -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jan 11 18:03:25.794: INFO: stderr: "" Jan 11 18:03:25.794: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 11 18:03:25.794: INFO: validating pod update-demo-nautilus-6wp8q Jan 11 18:03:25.801: INFO: got data: { "image": "nautilus.jpg" } Jan 11 18:03:25.801: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 11 18:03:25.801: INFO: update-demo-nautilus-6wp8q is verified up and running Jan 11 18:03:25.801: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-9336 get pods update-demo-nautilus-79x45 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 11 18:03:26.981: INFO: stderr: "" Jan 11 18:03:26.981: INFO: stdout: "true" Jan 11 18:03:26.981: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-9336 get pods update-demo-nautilus-79x45 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jan 11 18:03:28.130: INFO: stderr: "" Jan 11 18:03:28.130: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 11 18:03:28.130: INFO: validating pod update-demo-nautilus-79x45 Jan 11 18:03:28.136: INFO: got data: { "image": "nautilus.jpg" } Jan 11 18:03:28.137: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 11 18:03:28.137: INFO: update-demo-nautilus-79x45 is verified up and running STEP: using delete to clean up resources Jan 11 18:03:28.137: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-9336 delete --grace-period=0 --force -f -' Jan 11 18:03:29.325: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Jan 11 18:03:29.326: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Jan 11 18:03:29.326: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-9336 get rc,svc -l name=update-demo --no-headers' Jan 11 18:03:30.447: INFO: stderr: "No resources found in kubectl-9336 namespace.\n" Jan 11 18:03:30.447: INFO: stdout: "" Jan 11 18:03:30.447: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-9336 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 11 18:03:31.718: INFO: stderr: "" Jan 11 18:03:31.718: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 18:03:31.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9336" for this suite. • [SLOW TEST:80.978 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:297 should scale a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":309,"completed":217,"skipped":3843,"failed":0} [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 18:03:31.737: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating secret with name secret-test-f2a46a9f-e429-43fe-ac4a-2743a6181681 STEP: Creating a pod to test consume secrets Jan 11 18:03:32.060: INFO: Waiting up to 5m0s for pod "pod-secrets-b65543fc-3c9f-4149-9f9e-717a2bb2d737" in namespace "secrets-6899" to be "Succeeded or Failed" Jan 11 18:03:32.077: INFO: Pod "pod-secrets-b65543fc-3c9f-4149-9f9e-717a2bb2d737": Phase="Pending", Reason="", readiness=false. Elapsed: 16.657357ms Jan 11 18:03:34.108: INFO: Pod "pod-secrets-b65543fc-3c9f-4149-9f9e-717a2bb2d737": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047489015s Jan 11 18:03:36.115: INFO: Pod "pod-secrets-b65543fc-3c9f-4149-9f9e-717a2bb2d737": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.054489724s STEP: Saw pod success Jan 11 18:03:36.115: INFO: Pod "pod-secrets-b65543fc-3c9f-4149-9f9e-717a2bb2d737" satisfied condition "Succeeded or Failed" Jan 11 18:03:36.119: INFO: Trying to get logs from node leguer-worker pod pod-secrets-b65543fc-3c9f-4149-9f9e-717a2bb2d737 container secret-volume-test: STEP: delete the pod Jan 11 18:03:36.175: INFO: Waiting for pod pod-secrets-b65543fc-3c9f-4149-9f9e-717a2bb2d737 to disappear Jan 11 18:03:36.186: INFO: Pod pod-secrets-b65543fc-3c9f-4149-9f9e-717a2bb2d737 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 18:03:36.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6899" for this suite. STEP: Destroying namespace "secret-namespace-7439" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":309,"completed":218,"skipped":3843,"failed":0} SSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 18:03:36.229: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:92 Jan 11 18:03:36.295: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jan 11 18:03:36.313: INFO: Waiting for terminating namespaces to be deleted... 
Jan 11 18:03:36.318: INFO: Logging pods the apiserver thinks is on node leguer-worker before test Jan 11 18:03:36.335: INFO: rally-a8f48c6d-3kmika18-pdtzv from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 11 18:03:36.336: INFO: Container rally-a8f48c6d-3kmika18 ready: true, restart count 0 Jan 11 18:03:36.336: INFO: rally-a8f48c6d-3kmika18-pllzg from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 11 18:03:36.336: INFO: Container rally-a8f48c6d-3kmika18 ready: true, restart count 0 Jan 11 18:03:36.336: INFO: rally-a8f48c6d-4cyi45kq-j5tzz from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 11 18:03:36.336: INFO: Container rally-a8f48c6d-4cyi45kq ready: true, restart count 0 Jan 11 18:03:36.336: INFO: rally-a8f48c6d-f3hls6a3-57dwc from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:32 +0000 UTC (1 container statuses recorded) Jan 11 18:03:36.336: INFO: Container rally-a8f48c6d-f3hls6a3 ready: true, restart count 0 Jan 11 18:03:36.336: INFO: rally-a8f48c6d-1y3amfc0-lp8st from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:32 +0000 UTC (1 container statuses recorded) Jan 11 18:03:36.336: INFO: Container rally-a8f48c6d-1y3amfc0 ready: true, restart count 0 Jan 11 18:03:36.336: INFO: rally-a8f48c6d-9pqmjehi-9zwjj from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 11 18:03:36.336: INFO: Container rally-a8f48c6d-9pqmjehi ready: true, restart count 0 Jan 11 18:03:36.337: INFO: chaos-controller-manager-69c479c674-s796v from default started at 2021-01-10 20:58:24 +0000 UTC (1 container statuses recorded) Jan 11 18:03:36.337: INFO: Container chaos-mesh ready: true, restart count 0 Jan 11 18:03:36.337: INFO: chaos-daemon-lv692 from default started at 2021-01-10 20:58:25 +0000 UTC (1 container statuses recorded) Jan 11 18:03:36.337: INFO: Container chaos-daemon ready: true, restart count 0 Jan 11 18:03:36.337: INFO: kindnet-psm25 from kube-system started at 2021-01-10 17:38:10 +0000 UTC (1 container statuses recorded) Jan 11 18:03:36.337: INFO: Container kindnet-cni ready: true, restart count 0 Jan 11 18:03:36.337: INFO: kube-proxy-bmbcs from kube-system started at 2021-01-10 17:38:10 +0000 UTC (1 container statuses recorded) Jan 11 18:03:36.337: INFO: Container kube-proxy ready: true, restart count 0 Jan 11 18:03:36.337: INFO: update-demo-nautilus-79x45 from kubectl-9336 started at 2021-01-11 18:02:17 +0000 UTC (1 container statuses recorded) Jan 11 18:03:36.337: INFO: Container update-demo ready: false, restart count 0 Jan 11 18:03:36.337: INFO: Logging pods the apiserver thinks is on node leguer-worker2 before test Jan 11 18:03:36.355: INFO: rally-a8f48c6d-4cyi45kq-knr4r from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 11 18:03:36.355: INFO: Container rally-a8f48c6d-4cyi45kq ready: true, restart count 0 Jan 11 18:03:36.355: INFO: rally-a8f48c6d-f3hls6a3-dwt8n from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:32 +0000 UTC (1 container statuses recorded) Jan 11 18:03:36.355: INFO: Container rally-a8f48c6d-f3hls6a3 ready: true, restart count 0 Jan 11 18:03:36.355: INFO: rally-a8f48c6d-1y3amfc0-hh9qk from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:32 +0000 UTC (1 container statuses recorded) Jan 11 18:03:36.355: INFO: Container rally-a8f48c6d-1y3amfc0 ready: true, restart count 0 Jan 11 
18:03:36.355: INFO: rally-a8f48c6d-9pqmjehi-85slb from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 11 18:03:36.355: INFO: Container rally-a8f48c6d-9pqmjehi ready: true, restart count 0 Jan 11 18:03:36.355: INFO: rally-a8f48c6d-vnukxqu0-llj24 from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 11 18:03:36.355: INFO: Container rally-a8f48c6d-vnukxqu0 ready: true, restart count 0 Jan 11 18:03:36.355: INFO: rally-a8f48c6d-vnukxqu0-v85kr from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 11 18:03:36.355: INFO: Container rally-a8f48c6d-vnukxqu0 ready: true, restart count 0 Jan 11 18:03:36.356: INFO: chaos-daemon-ffkg7 from default started at 2021-01-10 20:58:25 +0000 UTC (1 container statuses recorded) Jan 11 18:03:36.356: INFO: Container chaos-daemon ready: true, restart count 0 Jan 11 18:03:36.356: INFO: kindnet-8wggd from kube-system started at 2021-01-10 17:38:10 +0000 UTC (1 container statuses recorded) Jan 11 18:03:36.356: INFO: Container kindnet-cni ready: true, restart count 0 Jan 11 18:03:36.356: INFO: kube-proxy-29gxg from kube-system started at 2021-01-10 17:38:09 +0000 UTC (1 container statuses recorded) Jan 11 18:03:36.356: INFO: Container kube-proxy ready: true, restart count 0 Jan 11 18:03:36.356: INFO: update-demo-nautilus-6wp8q from kubectl-9336 started at 2021-01-11 18:03:14 +0000 UTC (1 container statuses recorded) Jan 11 18:03:36.356: INFO: Container update-demo ready: false, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-00e29738-8128-48f6-b33a-5db2ed7c6ef7 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-00e29738-8128-48f6-b33a-5db2ed7c6ef7 off the node leguer-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-00e29738-8128-48f6-b33a-5db2ed7c6ef7 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 18:03:44.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-8904" for this suite. 
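Note: the NodeSelector check above boils down to labelling a node and then asking for that label in the pod spec; a minimal sketch using the leguer-worker2 node name from this run and an illustrative label key/value (the suite generates a random kubernetes.io/e2e-* key):
  kubectl label node leguer-worker2 example.com/e2e-demo=42
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: nodeselector-demo
  spec:
    nodeSelector:
      example.com/e2e-demo: "42"
    containers:
    - name: pause
      image: k8s.gcr.io/pause:3.2
  EOF
  kubectl get pod nodeselector-demo -o wide                  # scheduled onto leguer-worker2
  kubectl label node leguer-worker2 example.com/e2e-demo-    # remove the label afterwards, as the test does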
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:83 • [SLOW TEST:8.361 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":309,"completed":219,"skipped":3849,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 18:03:44.592: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 11 18:04:00.087: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 11 18:04:02.164: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745985040, loc:(*time.Location)(0x5f133f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745985040, loc:(*time.Location)(0x5f133f0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745985040, loc:(*time.Location)(0x5f133f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745985039, loc:(*time.Location)(0x5f133f0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 11 18:04:05.208: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook 
via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 18:04:17.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4325" for this suite. STEP: Destroying namespace "webhook-4325-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:32.956 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":309,"completed":220,"skipped":3867,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 18:04:17.551: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating the pod Jan 11 18:04:22.230: INFO: Successfully updated pod "annotationupdatee044d89c-6c98-4ec8-917c-2e6945164821" [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 18:04:26.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1780" for this suite. 
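The projected downward API test above exposes the pod's own annotations as a file through a projected volume, then modifies the annotations and waits for the kubelet to rewrite the mounted file. A minimal sketch of such a pod follows, assuming an illustrative annotation, mount path, and command (the agnhost image name is taken from elsewhere in this run).

```go
// Sketch: project metadata.annotations into a file the container can watch.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:        "annotationupdate-demo",
			Annotations: map[string]string{"builder": "alice"}, // updated later to observe the change
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "k8s.gcr.io/e2e-test-images/agnhost:2.21",
				Command: []string{"sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "podinfo",
					MountPath: "/etc/podinfo",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path:     "annotations",
									FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.annotations"},
								}},
							},
						}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```

After applying the printed manifest, updating the annotation (for example `kubectl annotate pod annotationupdate-demo builder=bob --overwrite`) should eventually be reflected in /etc/podinfo/annotations inside the container.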
• [SLOW TEST:8.743 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35 should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":309,"completed":221,"skipped":3888,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 18:04:26.298: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating pod test-webserver-3f8085f8-6a87-4131-b82c-5c640009dad1 in namespace container-probe-396 Jan 11 18:04:30.437: INFO: Started pod test-webserver-3f8085f8-6a87-4131-b82c-5c640009dad1 in namespace container-probe-396 STEP: checking the pod's current state and verifying that restartCount is present Jan 11 18:04:30.441: INFO: Initial restart count of pod test-webserver-3f8085f8-6a87-4131-b82c-5c640009dad1 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 18:08:31.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-396" for this suite. 
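The container-probe test above starts a small web server with an HTTP liveness probe and verifies over several minutes that restartCount stays at 0 while the probe keeps succeeding. A hedged sketch of the probe wiring follows; the image, port, and /healthz path are placeholders for any server that actually serves that endpoint, and the probe's embedded handler field is named Handler in the k8s.io/api release matching this run (it was renamed ProbeHandler in later releases).

```go
// Sketch: a pod whose container is restarted only if its HTTP liveness probe fails.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "test-webserver"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			Containers: []corev1.Container{{
				Name:  "test-webserver",
				Image: "registry.example.com/healthz-server:1.0", // hypothetical image serving GET /healthz on :8080
				LivenessProbe: &corev1.Probe{
					Handler: corev1.Handler{ // ProbeHandler in newer k8s.io/api releases
						HTTPGet: &corev1.HTTPGetAction{
							Path: "/healthz",
							Port: intstr.FromInt(8080),
						},
					},
					InitialDelaySeconds: 15,
					PeriodSeconds:       10,
					// The kubelet restarts the container only after this many
					// consecutive probe failures; a healthy server never hits it.
					FailureThreshold: 3,
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```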
• [SLOW TEST:245.314 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":309,"completed":222,"skipped":3925,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 18:08:31.614: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test emptydir 0644 on tmpfs Jan 11 18:08:31.978: INFO: Waiting up to 5m0s for pod "pod-0c87f5ef-2cd8-4da2-b84a-4e96d7b13b40" in namespace "emptydir-1500" to be "Succeeded or Failed" Jan 11 18:08:32.114: INFO: Pod "pod-0c87f5ef-2cd8-4da2-b84a-4e96d7b13b40": Phase="Pending", Reason="", readiness=false. Elapsed: 135.881075ms Jan 11 18:08:34.121: INFO: Pod "pod-0c87f5ef-2cd8-4da2-b84a-4e96d7b13b40": Phase="Pending", Reason="", readiness=false. Elapsed: 2.143187822s Jan 11 18:08:36.245: INFO: Pod "pod-0c87f5ef-2cd8-4da2-b84a-4e96d7b13b40": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.266910567s STEP: Saw pod success Jan 11 18:08:36.245: INFO: Pod "pod-0c87f5ef-2cd8-4da2-b84a-4e96d7b13b40" satisfied condition "Succeeded or Failed" Jan 11 18:08:36.250: INFO: Trying to get logs from node leguer-worker pod pod-0c87f5ef-2cd8-4da2-b84a-4e96d7b13b40 container test-container: STEP: delete the pod Jan 11 18:08:36.299: INFO: Waiting for pod pod-0c87f5ef-2cd8-4da2-b84a-4e96d7b13b40 to disappear Jan 11 18:08:36.306: INFO: Pod pod-0c87f5ef-2cd8-4da2-b84a-4e96d7b13b40 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 18:08:36.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1500" for this suite. 
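The EmptyDir test above mounts a tmpfs-backed emptyDir (medium "Memory"), writes a file with mode 0644, and expects the pod to reach Succeeded so its output can be inspected. A rough equivalent follows, using a busybox one-liner as an illustrative stand-in for the e2e mounttest image; image, paths, and the command are assumptions.

```go
// Sketch: tmpfs-backed emptyDir with a file created at mode 0644.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-0644-tmpfs"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "docker.io/library/busybox:1.29",
				// umask 0133 makes the redirect create the file as 0644;
				// the listing and mount output stand in for the mounttest checks.
				Command: []string{"sh", "-c",
					"umask 0133 && echo hello > /test-volume/file && ls -l /test-volume/file && mount | grep /test-volume"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium "Memory" backs the emptyDir with tmpfs instead of node disk.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```

As in the log above, the pod should run to completion and its logs should show the expected mode and a tmpfs mount for /test-volume.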
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":223,"skipped":3936,"failed":0} SSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 18:08:36.323: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should have an terminated reason [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [AfterEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 18:08:40.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-439" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":309,"completed":224,"skipped":3942,"failed":0} SSSSSSSS ------------------------------ [sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 18:08:40.479: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should test the lifecycle of a ReplicationController [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating a ReplicationController STEP: waiting for RC to be added STEP: waiting for available Replicas STEP: patching ReplicationController STEP: waiting for RC to be modified STEP: patching ReplicationController status STEP: waiting for RC to be modified STEP: waiting for available Replicas STEP: fetching ReplicationController status STEP: patching ReplicationController scale STEP: waiting for RC to be modified STEP: waiting for ReplicationController's scale to be the max amount STEP: fetching ReplicationController; ensuring that it's patched STEP: updating ReplicationController status STEP: waiting for RC to be modified STEP: listing all ReplicationControllers STEP: checking that 
ReplicationController has expected values STEP: deleting ReplicationControllers by collection STEP: waiting for ReplicationController to have a DELETED watchEvent [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 18:08:48.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-3200" for this suite. • [SLOW TEST:7.780 seconds] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should test the lifecycle of a ReplicationController [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance]","total":309,"completed":225,"skipped":3950,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 18:08:48.261: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test emptydir 0666 on tmpfs Jan 11 18:08:48.386: INFO: Waiting up to 5m0s for pod "pod-e2cc35bb-1516-4963-bc64-dce2282b9c98" in namespace "emptydir-1253" to be "Succeeded or Failed" Jan 11 18:08:48.442: INFO: Pod "pod-e2cc35bb-1516-4963-bc64-dce2282b9c98": Phase="Pending", Reason="", readiness=false. Elapsed: 55.739985ms Jan 11 18:08:50.545: INFO: Pod "pod-e2cc35bb-1516-4963-bc64-dce2282b9c98": Phase="Pending", Reason="", readiness=false. Elapsed: 2.1581036s Jan 11 18:08:52.551: INFO: Pod "pod-e2cc35bb-1516-4963-bc64-dce2282b9c98": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.164898908s STEP: Saw pod success Jan 11 18:08:52.552: INFO: Pod "pod-e2cc35bb-1516-4963-bc64-dce2282b9c98" satisfied condition "Succeeded or Failed" Jan 11 18:08:52.557: INFO: Trying to get logs from node leguer-worker2 pod pod-e2cc35bb-1516-4963-bc64-dce2282b9c98 container test-container: STEP: delete the pod Jan 11 18:08:52.633: INFO: Waiting for pod pod-e2cc35bb-1516-4963-bc64-dce2282b9c98 to disappear Jan 11 18:08:52.661: INFO: Pod pod-e2cc35bb-1516-4963-bc64-dce2282b9c98 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 18:08:52.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1253" for this suite. 
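The (non-root,0666,tmpfs) variant just above differs from the 0644 case mainly in the target file mode and in performing the write as a non-root user. A sketch of the non-root part via the pod-level security context follows; the UID/GID, image, and command are illustrative.

```go
// Sketch: same tmpfs emptyDir check, but run as a non-root user with a 0666 file.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int64Ptr(i int64) *int64 { return &i }

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-0666-nonroot"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			// Run the whole pod as UID/GID 1000 so the file is created and
			// read as a non-root user rather than root.
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser: int64Ptr(1000),
				FSGroup:   int64Ptr(1000),
			},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "docker.io/library/busybox:1.29",
				// umask 0111 leaves the redirect-created file at 0666.
				Command: []string{"sh", "-c",
					"umask 0111 && echo hello > /test-volume/file && ls -ln /test-volume/file"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name:         "test-volume",
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```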
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":226,"skipped":3979,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 18:08:52.677: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 18:08:59.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4447" for this suite. • [SLOW TEST:7.336 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance]","total":309,"completed":227,"skipped":3985,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 18:09:00.015: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81 [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating a test event STEP: listing events in all namespaces STEP: listing events in test namespace STEP: listing events with field selection filtering on source STEP: listing events with field selection filtering on reportingController STEP: getting the test event STEP: patching the test event STEP: getting the test event STEP: updating the test event STEP: getting the test event STEP: deleting the test event STEP: listing events in all namespaces STEP: listing events in test namespace [AfterEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 18:09:00.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-2038" for this suite. •{"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":309,"completed":228,"skipped":4003,"failed":0} SSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 18:09:00.338: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0111 18:09:01.616335 10 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Jan 11 18:10:04.019: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. 
[AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 18:10:04.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9807" for this suite. • [SLOW TEST:63.697 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":309,"completed":229,"skipped":4008,"failed":0} SSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 18:10:04.036: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should adopt matching pods on creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 18:10:09.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-9986" for this suite. 
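The ReplicationController adoption test above creates a bare pod labeled name=pod-adoption first, then an RC whose selector matches that label; the controller counts the existing pod toward its single replica and adopts it (adding an ownerReference) instead of creating a new one. A sketch of the two objects follows; the httpd image appears elsewhere in this run, while the names and namespace are illustrative.

```go
// Sketch: an orphan pod plus an RC whose selector matches it, so the RC adopts the pod.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	labels := map[string]string{"name": "pod-adoption"}

	// 1) A standalone pod carrying the label, created before the controller exists.
	orphan := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption", Labels: labels},
		Spec: corev1.PodSpec{Containers: []corev1.Container{{
			Name:  "pod-adoption",
			Image: "docker.io/library/httpd:2.4.38-alpine",
		}}},
	}

	// 2) An RC with replicas=1 whose selector matches the pod above; the existing
	//    pod satisfies the replica count, so no new pod is started.
	rc := corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption"},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: int32Ptr(1),
			Selector: labels,
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{Containers: []corev1.Container{{
					Name:  "pod-adoption",
					Image: "docker.io/library/httpd:2.4.38-alpine",
				}}},
			},
		},
	}

	for _, obj := range []interface{}{orphan, rc} {
		out, _ := json.MarshalIndent(obj, "", "  ")
		fmt.Println(string(out))
	}
}
```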
• [SLOW TEST:5.231 seconds] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":309,"completed":230,"skipped":4015,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 18:10:09.270: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test env composition Jan 11 18:10:09.375: INFO: Waiting up to 5m0s for pod "var-expansion-2d2129ea-27cb-4cf1-9be7-eb48ca5b75de" in namespace "var-expansion-2905" to be "Succeeded or Failed" Jan 11 18:10:09.409: INFO: Pod "var-expansion-2d2129ea-27cb-4cf1-9be7-eb48ca5b75de": Phase="Pending", Reason="", readiness=false. Elapsed: 33.974131ms Jan 11 18:10:11.416: INFO: Pod "var-expansion-2d2129ea-27cb-4cf1-9be7-eb48ca5b75de": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041344853s Jan 11 18:10:13.425: INFO: Pod "var-expansion-2d2129ea-27cb-4cf1-9be7-eb48ca5b75de": Phase="Running", Reason="", readiness=true. Elapsed: 4.050056839s Jan 11 18:10:15.434: INFO: Pod "var-expansion-2d2129ea-27cb-4cf1-9be7-eb48ca5b75de": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.059563462s STEP: Saw pod success Jan 11 18:10:15.435: INFO: Pod "var-expansion-2d2129ea-27cb-4cf1-9be7-eb48ca5b75de" satisfied condition "Succeeded or Failed" Jan 11 18:10:15.440: INFO: Trying to get logs from node leguer-worker2 pod var-expansion-2d2129ea-27cb-4cf1-9be7-eb48ca5b75de container dapi-container: STEP: delete the pod Jan 11 18:10:15.482: INFO: Waiting for pod var-expansion-2d2129ea-27cb-4cf1-9be7-eb48ca5b75de to disappear Jan 11 18:10:15.488: INFO: Pod var-expansion-2d2129ea-27cb-4cf1-9be7-eb48ca5b75de no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 18:10:15.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-2905" for this suite. 
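The variable-expansion test above composes one environment variable out of others with the $(VAR) syntax, which the kubelet expands before starting the container; the test then reads the composed value from the container's environment. A small sketch of the env block follows; the variable names, values, and image are illustrative.

```go
// Sketch: env var composition with $(VAR) references, expanded by the kubelet.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{
					{Name: "FOO", Value: "foo-value"},
					{Name: "BAR", Value: "bar-value"},
					// $(NAME) references other vars defined for the container and is
					// expanded before start, so the container sees
					// FOOBAR=foo-value;;bar-value in its environment.
					{Name: "FOOBAR", Value: "$(FOO);;$(BAR)"},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```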
• [SLOW TEST:6.234 seconds] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":309,"completed":231,"skipped":4041,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 18:10:15.506: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:85 [It] deployment should delete old replica sets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 11 18:10:15.679: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Jan 11 18:10:20.692: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jan 11 18:10:20.692: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:79 Jan 11 18:10:20.802: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-1975 f4b56c7f-eb96-47a9-8cb0-f4423721d053 213933 1 2021-01-11 18:10:20 +0000 UTC map[name:cleanup-pod] map[] [] [] [{e2e.test Update apps/v1 2021-01-11 18:10:20 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.21 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xb8182b8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} Jan 11 18:10:20.856: INFO: New ReplicaSet "test-cleanup-deployment-685c4f8568" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-685c4f8568 deployment-1975 449b1afb-a26d-4809-9cab-e86143b338b4 213941 1 2021-01-11 18:10:20 +0000 UTC map[name:cleanup-pod pod-template-hash:685c4f8568] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment f4b56c7f-eb96-47a9-8cb0-f4423721d053 0xb8187c7 0xb8187c8}] [] [{kube-controller-manager Update apps/v1 2021-01-11 18:10:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f4b56c7f-eb96-47a9-8cb0-f4423721d053\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 685c4f8568,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:685c4f8568] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.21 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xb818858 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] 
}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 11 18:10:20.856: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Jan 11 18:10:20.858: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-1975 71af04e0-e30d-4edf-8df1-4f9ce6d2a733 213934 1 2021-01-11 18:10:15 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment f4b56c7f-eb96-47a9-8cb0-f4423721d053 0xb8186b7 0xb8186b8}] [] [{e2e.test Update apps/v1 2021-01-11 18:10:15 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-01-11 18:10:20 +0000 UTC FieldsV1 {"f:metadata":{"f:ownerReferences":{".":{},"k:{\"uid\":\"f4b56c7f-eb96-47a9-8cb0-f4423721d053\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xb818758 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jan 11 18:10:20.906: INFO: Pod "test-cleanup-controller-hl4hg" is available: &Pod{ObjectMeta:{test-cleanup-controller-hl4hg test-cleanup-controller- deployment-1975 bbfedbe1-1e26-4ce4-85cd-340fb8c039f1 213917 0 2021-01-11 18:10:15 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller 71af04e0-e30d-4edf-8df1-4f9ce6d2a733 0xb818bf7 0xb818bf8}] [] [{kube-controller-manager Update v1 2021-01-11 18:10:15 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"71af04e0-e30d-4edf-8df1-4f9ce6d2a733\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-11 
18:10:18 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.113\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n75b4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n75b4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n75b4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 18:10:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 18:10:18 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 18:10:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 18:10:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.2.113,StartTime:2021-01-11 18:10:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-01-11 18:10:18 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://07ce99193bfe0f40f791847c10fc7e76470629563d6f591a62851cbd2a4eb9d6,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.113,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 11 18:10:20.907: INFO: Pod "test-cleanup-deployment-685c4f8568-rvwrl" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-685c4f8568-rvwrl test-cleanup-deployment-685c4f8568- deployment-1975 4c70b3d0-9a29-4f34-a9d5-0f5713836070 213940 0 2021-01-11 18:10:20 +0000 UTC map[name:cleanup-pod pod-template-hash:685c4f8568] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-685c4f8568 449b1afb-a26d-4809-9cab-e86143b338b4 0xb818db7 0xb818db8}] [] [{kube-controller-manager Update v1 2021-01-11 18:10:20 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"449b1afb-a26d-4809-9cab-e86143b338b4\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n75b4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n75b4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.21,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n75b4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions
:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 18:10:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 18:10:20.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-1975" for this suite. 
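The deployment dump above shows RevisionHistoryLimit:*0, which is what makes the controller delete superseded ReplicaSets as soon as a rollout completes instead of keeping them around for rollback. A sketch of a deployment with that setting follows, reusing the labels and agnhost image from the dump; everything else is illustrative.

```go
// Sketch: a Deployment that keeps zero old ReplicaSets after each rollout.
package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	labels := map[string]string{"name": "cleanup-pod"}
	deployment := appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-cleanup-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: int32Ptr(1),
			// Keep zero superseded ReplicaSets: once a new ReplicaSet is fully
			// rolled out, older ones are garbage-collected by the deployment
			// controller, which is the behavior the test waits for above.
			RevisionHistoryLimit: int32Ptr(0),
			Selector:             &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{Containers: []corev1.Container{{
					Name:  "agnhost",
					Image: "k8s.gcr.io/e2e-test-images/agnhost:2.21",
				}}},
			},
		},
	}
	out, _ := json.MarshalIndent(deployment, "", "  ")
	fmt.Println(string(out))
}
```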
• [SLOW TEST:5.435 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":309,"completed":232,"skipped":4082,"failed":0} SSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 18:10:20.943: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-m9cx4 in namespace proxy-7547 I0111 18:10:21.100510 10 runners.go:190] Created replication controller with name: proxy-service-m9cx4, namespace: proxy-7547, replica count: 1 I0111 18:10:22.152102 10 runners.go:190] proxy-service-m9cx4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0111 18:10:23.152978 10 runners.go:190] proxy-service-m9cx4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0111 18:10:24.153772 10 runners.go:190] proxy-service-m9cx4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0111 18:10:25.154641 10 runners.go:190] proxy-service-m9cx4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0111 18:10:26.155534 10 runners.go:190] proxy-service-m9cx4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0111 18:10:27.156586 10 runners.go:190] proxy-service-m9cx4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0111 18:10:28.157453 10 runners.go:190] proxy-service-m9cx4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0111 18:10:29.158335 10 runners.go:190] proxy-service-m9cx4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0111 18:10:30.158986 10 runners.go:190] proxy-service-m9cx4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0111 18:10:31.159610 10 runners.go:190] proxy-service-m9cx4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0111 18:10:32.160333 10 runners.go:190] proxy-service-m9cx4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0111 18:10:33.161379 10 
runners.go:190] proxy-service-m9cx4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0111 18:10:34.162194 10 runners.go:190] proxy-service-m9cx4 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 11 18:10:34.185: INFO: setup took 13.158961791s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Jan 11 18:10:34.221: INFO: (0) /api/v1/namespaces/proxy-7547/pods/proxy-service-m9cx4-l9qrn/proxy/: test (200; 34.437096ms) Jan 11 18:10:34.222: INFO: (0) /api/v1/namespaces/proxy-7547/pods/proxy-service-m9cx4-l9qrn:160/proxy/: foo (200; 35.834802ms) Jan 11 18:10:34.222: INFO: (0) /api/v1/namespaces/proxy-7547/pods/proxy-service-m9cx4-l9qrn:162/proxy/: bar (200; 35.957337ms) Jan 11 18:10:34.222: INFO: (0) /api/v1/namespaces/proxy-7547/pods/proxy-service-m9cx4-l9qrn:1080/proxy/: test<... (200; 34.255276ms) Jan 11 18:10:34.226: INFO: (0) /api/v1/namespaces/proxy-7547/pods/http:proxy-service-m9cx4-l9qrn:162/proxy/: bar (200; 40.279323ms) Jan 11 18:10:34.228: INFO: (0) /api/v1/namespaces/proxy-7547/services/proxy-service-m9cx4:portname2/proxy/: bar (200; 42.363533ms) Jan 11 18:10:34.229: INFO: (0) /api/v1/namespaces/proxy-7547/pods/http:proxy-service-m9cx4-l9qrn:160/proxy/: foo (200; 42.909539ms) Jan 11 18:10:34.229: INFO: (0) /api/v1/namespaces/proxy-7547/services/http:proxy-service-m9cx4:portname2/proxy/: bar (200; 42.898849ms) Jan 11 18:10:34.229: INFO: (0) /api/v1/namespaces/proxy-7547/services/http:proxy-service-m9cx4:portname1/proxy/: foo (200; 43.063059ms) Jan 11 18:10:34.229: INFO: (0) /api/v1/namespaces/proxy-7547/pods/http:proxy-service-m9cx4-l9qrn:1080/proxy/: ... (200; 43.239845ms) Jan 11 18:10:34.229: INFO: (0) /api/v1/namespaces/proxy-7547/services/proxy-service-m9cx4:portname1/proxy/: foo (200; 43.514323ms) Jan 11 18:10:34.232: INFO: (0) /api/v1/namespaces/proxy-7547/pods/https:proxy-service-m9cx4-l9qrn:443/proxy/: test (200; 5.594403ms) Jan 11 18:10:34.240: INFO: (1) /api/v1/namespaces/proxy-7547/pods/https:proxy-service-m9cx4-l9qrn:460/proxy/: tls baz (200; 5.803746ms) Jan 11 18:10:34.240: INFO: (1) /api/v1/namespaces/proxy-7547/services/proxy-service-m9cx4:portname1/proxy/: foo (200; 6.655626ms) Jan 11 18:10:34.241: INFO: (1) /api/v1/namespaces/proxy-7547/pods/http:proxy-service-m9cx4-l9qrn:160/proxy/: foo (200; 6.555753ms) Jan 11 18:10:34.241: INFO: (1) /api/v1/namespaces/proxy-7547/services/https:proxy-service-m9cx4:tlsportname1/proxy/: tls baz (200; 6.289852ms) Jan 11 18:10:34.241: INFO: (1) /api/v1/namespaces/proxy-7547/pods/https:proxy-service-m9cx4-l9qrn:462/proxy/: tls qux (200; 7.017222ms) Jan 11 18:10:34.241: INFO: (1) /api/v1/namespaces/proxy-7547/pods/https:proxy-service-m9cx4-l9qrn:443/proxy/: ... (200; 7.636641ms) Jan 11 18:10:34.242: INFO: (1) /api/v1/namespaces/proxy-7547/pods/proxy-service-m9cx4-l9qrn:1080/proxy/: test<... 
(200; 7.958863ms) Jan 11 18:10:34.242: INFO: (1) /api/v1/namespaces/proxy-7547/services/https:proxy-service-m9cx4:tlsportname2/proxy/: tls qux (200; 8.1332ms) Jan 11 18:10:34.242: INFO: (1) /api/v1/namespaces/proxy-7547/services/proxy-service-m9cx4:portname2/proxy/: bar (200; 8.257621ms) Jan 11 18:10:34.242: INFO: (1) /api/v1/namespaces/proxy-7547/services/http:proxy-service-m9cx4:portname2/proxy/: bar (200; 8.216101ms) Jan 11 18:10:34.243: INFO: (1) /api/v1/namespaces/proxy-7547/pods/proxy-service-m9cx4-l9qrn:160/proxy/: foo (200; 8.660804ms) Jan 11 18:10:34.247: INFO: (2) /api/v1/namespaces/proxy-7547/pods/proxy-service-m9cx4-l9qrn:162/proxy/: bar (200; 3.882349ms) Jan 11 18:10:34.248: INFO: (2) /api/v1/namespaces/proxy-7547/pods/http:proxy-service-m9cx4-l9qrn:160/proxy/: foo (200; 5.196063ms) Jan 11 18:10:34.248: INFO: (2) /api/v1/namespaces/proxy-7547/services/http:proxy-service-m9cx4:portname1/proxy/: foo (200; 5.424421ms) Jan 11 18:10:34.248: INFO: (2) /api/v1/namespaces/proxy-7547/services/https:proxy-service-m9cx4:tlsportname1/proxy/: tls baz (200; 5.557096ms) Jan 11 18:10:34.249: INFO: (2) /api/v1/namespaces/proxy-7547/services/https:proxy-service-m9cx4:tlsportname2/proxy/: tls qux (200; 6.002842ms) Jan 11 18:10:34.249: INFO: (2) /api/v1/namespaces/proxy-7547/services/proxy-service-m9cx4:portname1/proxy/: foo (200; 6.182594ms) Jan 11 18:10:34.249: INFO: (2) /api/v1/namespaces/proxy-7547/pods/https:proxy-service-m9cx4-l9qrn:462/proxy/: tls qux (200; 6.314654ms) Jan 11 18:10:34.249: INFO: (2) /api/v1/namespaces/proxy-7547/pods/proxy-service-m9cx4-l9qrn/proxy/: test (200; 6.365562ms) Jan 11 18:10:34.250: INFO: (2) /api/v1/namespaces/proxy-7547/services/proxy-service-m9cx4:portname2/proxy/: bar (200; 6.578641ms) Jan 11 18:10:34.250: INFO: (2) /api/v1/namespaces/proxy-7547/pods/proxy-service-m9cx4-l9qrn:1080/proxy/: test<... (200; 6.587862ms) Jan 11 18:10:34.250: INFO: (2) /api/v1/namespaces/proxy-7547/pods/https:proxy-service-m9cx4-l9qrn:460/proxy/: tls baz (200; 7.044844ms) Jan 11 18:10:34.250: INFO: (2) /api/v1/namespaces/proxy-7547/pods/https:proxy-service-m9cx4-l9qrn:443/proxy/: ... (200; 7.317187ms) Jan 11 18:10:34.251: INFO: (2) /api/v1/namespaces/proxy-7547/pods/proxy-service-m9cx4-l9qrn:160/proxy/: foo (200; 7.5196ms) Jan 11 18:10:34.251: INFO: (2) /api/v1/namespaces/proxy-7547/services/http:proxy-service-m9cx4:portname2/proxy/: bar (200; 7.700507ms) Jan 11 18:10:34.255: INFO: (3) /api/v1/namespaces/proxy-7547/pods/http:proxy-service-m9cx4-l9qrn:162/proxy/: bar (200; 4.230961ms) Jan 11 18:10:34.255: INFO: (3) /api/v1/namespaces/proxy-7547/pods/http:proxy-service-m9cx4-l9qrn:160/proxy/: foo (200; 4.22859ms) Jan 11 18:10:34.256: INFO: (3) /api/v1/namespaces/proxy-7547/pods/https:proxy-service-m9cx4-l9qrn:460/proxy/: tls baz (200; 4.980215ms) Jan 11 18:10:34.256: INFO: (3) /api/v1/namespaces/proxy-7547/pods/https:proxy-service-m9cx4-l9qrn:462/proxy/: tls qux (200; 5.14847ms) Jan 11 18:10:34.257: INFO: (3) /api/v1/namespaces/proxy-7547/pods/proxy-service-m9cx4-l9qrn:160/proxy/: foo (200; 5.363001ms) Jan 11 18:10:34.257: INFO: (3) /api/v1/namespaces/proxy-7547/pods/http:proxy-service-m9cx4-l9qrn:1080/proxy/: ... 
(200; 5.49153ms) Jan 11 18:10:34.257: INFO: (3) /api/v1/namespaces/proxy-7547/pods/https:proxy-service-m9cx4-l9qrn:443/proxy/: test (200; 6.565697ms) Jan 11 18:10:34.258: INFO: (3) /api/v1/namespaces/proxy-7547/services/proxy-service-m9cx4:portname2/proxy/: bar (200; 6.805937ms) Jan 11 18:10:34.258: INFO: (3) /api/v1/namespaces/proxy-7547/pods/proxy-service-m9cx4-l9qrn:1080/proxy/: test<... (200; 7.13096ms) Jan 11 18:10:34.258: INFO: (3) /api/v1/namespaces/proxy-7547/services/http:proxy-service-m9cx4:portname1/proxy/: foo (200; 7.035137ms) Jan 11 18:10:34.263: INFO: (4) /api/v1/namespaces/proxy-7547/services/proxy-service-m9cx4:portname1/proxy/: foo (200; 4.176147ms) Jan 11 18:10:34.264: INFO: (4) /api/v1/namespaces/proxy-7547/pods/https:proxy-service-m9cx4-l9qrn:460/proxy/: tls baz (200; 5.538081ms) Jan 11 18:10:34.264: INFO: (4) /api/v1/namespaces/proxy-7547/services/https:proxy-service-m9cx4:tlsportname2/proxy/: tls qux (200; 5.509024ms) Jan 11 18:10:34.265: INFO: (4) /api/v1/namespaces/proxy-7547/services/https:proxy-service-m9cx4:tlsportname1/proxy/: tls baz (200; 6.087651ms) Jan 11 18:10:34.265: INFO: (4) /api/v1/namespaces/proxy-7547/pods/proxy-service-m9cx4-l9qrn:162/proxy/: bar (200; 6.138549ms) Jan 11 18:10:34.265: INFO: (4) /api/v1/namespaces/proxy-7547/pods/https:proxy-service-m9cx4-l9qrn:443/proxy/: test (200; 7.485189ms) Jan 11 18:10:34.266: INFO: (4) /api/v1/namespaces/proxy-7547/pods/proxy-service-m9cx4-l9qrn:160/proxy/: foo (200; 7.524559ms) Jan 11 18:10:34.266: INFO: (4) /api/v1/namespaces/proxy-7547/pods/http:proxy-service-m9cx4-l9qrn:1080/proxy/: ... (200; 7.88309ms) Jan 11 18:10:34.267: INFO: (4) /api/v1/namespaces/proxy-7547/pods/http:proxy-service-m9cx4-l9qrn:162/proxy/: bar (200; 8.015024ms) Jan 11 18:10:34.286: INFO: (4) /api/v1/namespaces/proxy-7547/pods/proxy-service-m9cx4-l9qrn:1080/proxy/: test<... (200; 27.69679ms) Jan 11 18:10:34.291: INFO: (5) /api/v1/namespaces/proxy-7547/pods/proxy-service-m9cx4-l9qrn:162/proxy/: bar (200; 4.262313ms) Jan 11 18:10:34.295: INFO: (5) /api/v1/namespaces/proxy-7547/pods/https:proxy-service-m9cx4-l9qrn:443/proxy/: test (200; 10.232904ms) Jan 11 18:10:34.297: INFO: (5) /api/v1/namespaces/proxy-7547/pods/https:proxy-service-m9cx4-l9qrn:460/proxy/: tls baz (200; 10.35282ms) Jan 11 18:10:34.298: INFO: (5) /api/v1/namespaces/proxy-7547/pods/proxy-service-m9cx4-l9qrn:1080/proxy/: test<... (200; 10.765175ms) Jan 11 18:10:34.298: INFO: (5) /api/v1/namespaces/proxy-7547/pods/proxy-service-m9cx4-l9qrn:160/proxy/: foo (200; 11.000692ms) Jan 11 18:10:34.299: INFO: (5) /api/v1/namespaces/proxy-7547/pods/http:proxy-service-m9cx4-l9qrn:1080/proxy/: ... (200; 11.444258ms) Jan 11 18:10:34.299: INFO: (5) /api/v1/namespaces/proxy-7547/pods/https:proxy-service-m9cx4-l9qrn:462/proxy/: tls qux (200; 11.694874ms) Jan 11 18:10:34.299: INFO: (5) /api/v1/namespaces/proxy-7547/services/https:proxy-service-m9cx4:tlsportname2/proxy/: tls qux (200; 12.099218ms) Jan 11 18:10:34.299: INFO: (5) /api/v1/namespaces/proxy-7547/services/http:proxy-service-m9cx4:portname1/proxy/: foo (200; 11.949456ms) Jan 11 18:10:34.299: INFO: (5) /api/v1/namespaces/proxy-7547/services/https:proxy-service-m9cx4:tlsportname1/proxy/: tls baz (200; 12.450212ms) Jan 11 18:10:34.304: INFO: (6) /api/v1/namespaces/proxy-7547/pods/proxy-service-m9cx4-l9qrn:162/proxy/: bar (200; 4.167629ms) Jan 11 18:10:34.305: INFO: (6) /api/v1/namespaces/proxy-7547/pods/proxy-service-m9cx4-l9qrn:1080/proxy/: test<... 
(200; 5.534905ms) Jan 11 18:10:34.305: INFO: (6) /api/v1/namespaces/proxy-7547/pods/http:proxy-service-m9cx4-l9qrn:162/proxy/: bar (200; 5.342162ms) Jan 11 18:10:34.306: INFO: (6) /api/v1/namespaces/proxy-7547/pods/https:proxy-service-m9cx4-l9qrn:443/proxy/: test (200; 6.940755ms) Jan 11 18:10:34.307: INFO: (6) /api/v1/namespaces/proxy-7547/services/proxy-service-m9cx4:portname1/proxy/: foo (200; 7.165717ms) Jan 11 18:10:34.307: INFO: (6) /api/v1/namespaces/proxy-7547/pods/http:proxy-service-m9cx4-l9qrn:1080/proxy/: ... (200; 7.09194ms) Jan 11 18:10:34.307: INFO: (6) /api/v1/namespaces/proxy-7547/pods/https:proxy-service-m9cx4-l9qrn:460/proxy/: tls baz (200; 7.326054ms) Jan 11 18:10:34.307: INFO: (6) /api/v1/namespaces/proxy-7547/services/https:proxy-service-m9cx4:tlsportname2/proxy/: tls qux (200; 7.588717ms) Jan 11 18:10:34.307: INFO: (6) /api/v1/namespaces/proxy-7547/services/proxy-service-m9cx4:portname2/proxy/: bar (200; 7.659554ms) Jan 11 18:10:34.308: INFO: (6) /api/v1/namespaces/proxy-7547/pods/https:proxy-service-m9cx4-l9qrn:462/proxy/: tls qux (200; 7.846272ms) Jan 11 18:10:34.308: INFO: (6) /api/v1/namespaces/proxy-7547/services/http:proxy-service-m9cx4:portname2/proxy/: bar (200; 7.991942ms) Jan 11 18:10:34.308: INFO: (6) /api/v1/namespaces/proxy-7547/pods/http:proxy-service-m9cx4-l9qrn:160/proxy/: foo (200; 8.230553ms) Jan 11 18:10:34.308: INFO: (6) /api/v1/namespaces/proxy-7547/pods/proxy-service-m9cx4-l9qrn:160/proxy/: foo (200; 8.51708ms) Jan 11 18:10:34.309: INFO: (6) /api/v1/namespaces/proxy-7547/services/https:proxy-service-m9cx4:tlsportname1/proxy/: tls baz (200; 8.889654ms) Jan 11 18:10:34.309: INFO: (6) /api/v1/namespaces/proxy-7547/services/http:proxy-service-m9cx4:portname1/proxy/: foo (200; 8.914333ms) Jan 11 18:10:34.312: INFO: (7) /api/v1/namespaces/proxy-7547/pods/proxy-service-m9cx4-l9qrn/proxy/: test (200; 3.283864ms) Jan 11 18:10:34.314: INFO: (7) /api/v1/namespaces/proxy-7547/pods/proxy-service-m9cx4-l9qrn:1080/proxy/: test<... (200; 4.60484ms) Jan 11 18:10:34.314: INFO: (7) /api/v1/namespaces/proxy-7547/pods/proxy-service-m9cx4-l9qrn:162/proxy/: bar (200; 5.148211ms) Jan 11 18:10:34.314: INFO: (7) /api/v1/namespaces/proxy-7547/pods/http:proxy-service-m9cx4-l9qrn:1080/proxy/: ... 
(200; 5.014333ms) Jan 11 18:10:34.314: INFO: (7) /api/v1/namespaces/proxy-7547/services/https:proxy-service-m9cx4:tlsportname1/proxy/: tls baz (200; 5.375542ms) Jan 11 18:10:34.314: INFO: (7) /api/v1/namespaces/proxy-7547/services/proxy-service-m9cx4:portname2/proxy/: bar (200; 5.565556ms) Jan 11 18:10:34.315: INFO: (7) /api/v1/namespaces/proxy-7547/pods/http:proxy-service-m9cx4-l9qrn:160/proxy/: foo (200; 5.692457ms) Jan 11 18:10:34.315: INFO: (7) /api/v1/namespaces/proxy-7547/pods/proxy-service-m9cx4-l9qrn:160/proxy/: foo (200; 5.739247ms) Jan 11 18:10:34.315: INFO: (7) /api/v1/namespaces/proxy-7547/pods/https:proxy-service-m9cx4-l9qrn:462/proxy/: tls qux (200; 5.813638ms) Jan 11 18:10:34.315: INFO: (7) /api/v1/namespaces/proxy-7547/services/proxy-service-m9cx4:portname1/proxy/: foo (200; 5.996779ms) Jan 11 18:10:34.315: INFO: (7) /api/v1/namespaces/proxy-7547/pods/http:proxy-service-m9cx4-l9qrn:162/proxy/: bar (200; 6.058792ms) Jan 11 18:10:34.315: INFO: (7) /api/v1/namespaces/proxy-7547/services/http:proxy-service-m9cx4:portname2/proxy/: bar (200; 6.377673ms) Jan 11 18:10:34.316: INFO: (7) /api/v1/namespaces/proxy-7547/pods/https:proxy-service-m9cx4-l9qrn:460/proxy/: tls baz (200; 6.492371ms) Jan 11 18:10:34.316: INFO: (7) /api/v1/namespaces/proxy-7547/services/http:proxy-service-m9cx4:portname1/proxy/: foo (200; 6.642736ms) Jan 11 18:10:34.316: INFO: (7) /api/v1/namespaces/proxy-7547/pods/https:proxy-service-m9cx4-l9qrn:443/proxy/: test<... (200; 6.085326ms) Jan 11 18:10:34.323: INFO: (8) /api/v1/namespaces/proxy-7547/pods/https:proxy-service-m9cx4-l9qrn:460/proxy/: tls baz (200; 6.295329ms) Jan 11 18:10:34.323: INFO: (8) /api/v1/namespaces/proxy-7547/services/https:proxy-service-m9cx4:tlsportname1/proxy/: tls baz (200; 6.446571ms) Jan 11 18:10:34.323: INFO: (8) /api/v1/namespaces/proxy-7547/services/proxy-service-m9cx4:portname1/proxy/: foo (200; 6.565137ms) Jan 11 18:10:34.323: INFO: (8) /api/v1/namespaces/proxy-7547/pods/http:proxy-service-m9cx4-l9qrn:162/proxy/: bar (200; 6.505602ms) Jan 11 18:10:34.323: INFO: (8) /api/v1/namespaces/proxy-7547/pods/proxy-service-m9cx4-l9qrn/proxy/: test (200; 6.771485ms) Jan 11 18:10:34.323: INFO: (8) /api/v1/namespaces/proxy-7547/pods/proxy-service-m9cx4-l9qrn:160/proxy/: foo (200; 6.722585ms) Jan 11 18:10:34.324: INFO: (8) /api/v1/namespaces/proxy-7547/services/http:proxy-service-m9cx4:portname1/proxy/: foo (200; 7.005915ms) Jan 11 18:10:34.324: INFO: (8) /api/v1/namespaces/proxy-7547/services/http:proxy-service-m9cx4:portname2/proxy/: bar (200; 7.201711ms) Jan 11 18:10:34.324: INFO: (8) /api/v1/namespaces/proxy-7547/pods/http:proxy-service-m9cx4-l9qrn:1080/proxy/: ... (200; 7.262905ms) Jan 11 18:10:34.324: INFO: (8) /api/v1/namespaces/proxy-7547/pods/https:proxy-service-m9cx4-l9qrn:443/proxy/: ... 
(200; 4.940076ms) Jan 11 18:10:34.330: INFO: (9) /api/v1/namespaces/proxy-7547/services/proxy-service-m9cx4:portname1/proxy/: foo (200; 5.882235ms) Jan 11 18:10:34.330: INFO: (9) /api/v1/namespaces/proxy-7547/services/https:proxy-service-m9cx4:tlsportname2/proxy/: tls qux (200; 5.901832ms) Jan 11 18:10:34.330: INFO: (9) /api/v1/namespaces/proxy-7547/pods/http:proxy-service-m9cx4-l9qrn:162/proxy/: bar (200; 5.748578ms) Jan 11 18:10:34.330: INFO: (9) /api/v1/namespaces/proxy-7547/services/proxy-service-m9cx4:portname2/proxy/: bar (200; 6.001507ms) Jan 11 18:10:34.330: INFO: (9) /api/v1/namespaces/proxy-7547/services/http:proxy-service-m9cx4:portname2/proxy/: bar (200; 6.194491ms) Jan 11 18:10:34.331: INFO: (9) /api/v1/namespaces/proxy-7547/pods/https:proxy-service-m9cx4-l9qrn:462/proxy/: tls qux (200; 6.042772ms) Jan 11 18:10:34.331: INFO: (9) /api/v1/namespaces/proxy-7547/pods/proxy-service-m9cx4-l9qrn:1080/proxy/: test<... (200; 6.20487ms) Jan 11 18:10:34.331: INFO: (9) /api/v1/namespaces/proxy-7547/pods/http:proxy-service-m9cx4-l9qrn:160/proxy/: foo (200; 6.487681ms) Jan 11 18:10:34.331: INFO: (9) /api/v1/namespaces/proxy-7547/pods/proxy-service-m9cx4-l9qrn/proxy/: test (200; 6.352232ms) Jan 11 18:10:34.333: INFO: (9) /api/v1/namespaces/proxy-7547/pods/proxy-service-m9cx4-l9qrn:160/proxy/: foo (200; 8.981412ms) Jan 11 18:10:34.334: INFO: (9) /api/v1/namespaces/proxy-7547/services/https:proxy-service-m9cx4:tlsportname1/proxy/: tls baz (200; 10.187972ms) Jan 11 18:10:34.335: INFO: (9) /api/v1/namespaces/proxy-7547/services/http:proxy-service-m9cx4:portname1/proxy/: foo (200; 10.814234ms) Jan 11 18:10:34.335: INFO: (9) /api/v1/namespaces/proxy-7547/pods/https:proxy-service-m9cx4-l9qrn:443/proxy/: test<... (200; 3.532567ms) Jan 11 18:10:34.340: INFO: (10) /api/v1/namespaces/proxy-7547/pods/https:proxy-service-m9cx4-l9qrn:443/proxy/: ... (200; 5.641575ms) Jan 11 18:10:34.342: INFO: (10) /api/v1/namespaces/proxy-7547/services/http:proxy-service-m9cx4:portname2/proxy/: bar (200; 5.856911ms) Jan 11 18:10:34.342: INFO: (10) /api/v1/namespaces/proxy-7547/services/https:proxy-service-m9cx4:tlsportname1/proxy/: tls baz (200; 5.662537ms) Jan 11 18:10:34.342: INFO: (10) /api/v1/namespaces/proxy-7547/pods/proxy-service-m9cx4-l9qrn:162/proxy/: bar (200; 5.986881ms) Jan 11 18:10:34.342: INFO: (10) /api/v1/namespaces/proxy-7547/pods/https:proxy-service-m9cx4-l9qrn:460/proxy/: tls baz (200; 6.182987ms) Jan 11 18:10:34.342: INFO: (10) /api/v1/namespaces/proxy-7547/services/proxy-service-m9cx4:portname1/proxy/: foo (200; 6.318076ms) Jan 11 18:10:34.342: INFO: (10) /api/v1/namespaces/proxy-7547/pods/https:proxy-service-m9cx4-l9qrn:462/proxy/: tls qux (200; 6.350779ms) Jan 11 18:10:34.343: INFO: (10) /api/v1/namespaces/proxy-7547/services/https:proxy-service-m9cx4:tlsportname2/proxy/: tls qux (200; 6.990188ms) Jan 11 18:10:34.343: INFO: (10) /api/v1/namespaces/proxy-7547/pods/proxy-service-m9cx4-l9qrn/proxy/: test (200; 6.794852ms) Jan 11 18:10:34.343: INFO: (10) /api/v1/namespaces/proxy-7547/services/http:proxy-service-m9cx4:portname1/proxy/: foo (200; 7.113363ms) Jan 11 18:10:34.347: INFO: (11) /api/v1/namespaces/proxy-7547/pods/https:proxy-service-m9cx4-l9qrn:443/proxy/: test (200; 5.688248ms) Jan 11 18:10:34.349: INFO: (11) /api/v1/namespaces/proxy-7547/pods/http:proxy-service-m9cx4-l9qrn:162/proxy/: bar (200; 5.891046ms) Jan 11 18:10:34.350: INFO: (11) /api/v1/namespaces/proxy-7547/pods/proxy-service-m9cx4-l9qrn:1080/proxy/: test<... 
(200; 5.825158ms) Jan 11 18:10:34.351: INFO: (11) /api/v1/namespaces/proxy-7547/services/https:proxy-service-m9cx4:tlsportname2/proxy/: tls qux (200; 6.976424ms) Jan 11 18:10:34.351: INFO: (11) /api/v1/namespaces/proxy-7547/services/https:proxy-service-m9cx4:tlsportname1/proxy/: tls baz (200; 7.180786ms) Jan 11 18:10:34.351: INFO: (11) /api/v1/namespaces/proxy-7547/services/http:proxy-service-m9cx4:portname2/proxy/: bar (200; 7.35662ms) Jan 11 18:10:34.351: INFO: (11) /api/v1/namespaces/proxy-7547/services/proxy-service-m9cx4:portname2/proxy/: bar (200; 7.609142ms) Jan 11 18:10:34.351: INFO: (11) /api/v1/namespaces/proxy-7547/pods/http:proxy-service-m9cx4-l9qrn:1080/proxy/: ... (200; 7.74778ms) Jan 11 18:10:34.352: INFO: (11) /api/v1/namespaces/proxy-7547/pods/proxy-service-m9cx4-l9qrn:162/proxy/: bar (200; 7.993534ms) Jan 11 18:10:34.352: INFO: (11) /api/v1/namespaces/proxy-7547/pods/proxy-service-m9cx4-l9qrn:160/proxy/: foo (200; 8.004571ms) Jan 11 18:10:34.352: INFO: (11) /api/v1/namespaces/proxy-7547/services/http:proxy-service-m9cx4:portname1/proxy/: foo (200; 8.505343ms) Jan 11 18:10:34.356: INFO: (12) /api/v1/namespaces/proxy-7547/pods/proxy-service-m9cx4-l9qrn/proxy/: test (200; 3.346882ms) Jan 11 18:10:34.357: INFO: (12) /api/v1/namespaces/proxy-7547/pods/http:proxy-service-m9cx4-l9qrn:160/proxy/: foo (200; 4.549551ms) Jan 11 18:10:34.357: INFO: (12) /api/v1/namespaces/proxy-7547/pods/http:proxy-service-m9cx4-l9qrn:162/proxy/: bar (200; 4.839625ms) Jan 11 18:10:34.357: INFO: (12) /api/v1/namespaces/proxy-7547/services/http:proxy-service-m9cx4:portname1/proxy/: foo (200; 5.207745ms) Jan 11 18:10:34.358: INFO: (12) /api/v1/namespaces/proxy-7547/pods/proxy-service-m9cx4-l9qrn:160/proxy/: foo (200; 5.392277ms) Jan 11 18:10:34.358: INFO: (12) /api/v1/namespaces/proxy-7547/pods/http:proxy-service-m9cx4-l9qrn:1080/proxy/: ... (200; 5.322865ms) Jan 11 18:10:34.358: INFO: (12) /api/v1/namespaces/proxy-7547/services/proxy-service-m9cx4:portname2/proxy/: bar (200; 5.861864ms) Jan 11 18:10:34.359: INFO: (12) /api/v1/namespaces/proxy-7547/services/http:proxy-service-m9cx4:portname2/proxy/: bar (200; 6.613132ms) Jan 11 18:10:34.359: INFO: (12) /api/v1/namespaces/proxy-7547/pods/https:proxy-service-m9cx4-l9qrn:460/proxy/: tls baz (200; 6.479172ms) Jan 11 18:10:34.359: INFO: (12) /api/v1/namespaces/proxy-7547/pods/https:proxy-service-m9cx4-l9qrn:443/proxy/: test<... (200; 6.693228ms) Jan 11 18:10:34.360: INFO: (12) /api/v1/namespaces/proxy-7547/pods/https:proxy-service-m9cx4-l9qrn:462/proxy/: tls qux (200; 7.320349ms) Jan 11 18:10:34.360: INFO: (12) /api/v1/namespaces/proxy-7547/services/proxy-service-m9cx4:portname1/proxy/: foo (200; 7.618624ms) Jan 11 18:10:34.360: INFO: (12) /api/v1/namespaces/proxy-7547/pods/proxy-service-m9cx4-l9qrn:162/proxy/: bar (200; 7.747799ms) Jan 11 18:10:34.361: INFO: (12) /api/v1/namespaces/proxy-7547/services/https:proxy-service-m9cx4:tlsportname1/proxy/: tls baz (200; 8.334122ms) Jan 11 18:10:34.361: INFO: (12) /api/v1/namespaces/proxy-7547/services/https:proxy-service-m9cx4:tlsportname2/proxy/: tls qux (200; 7.921407ms) Jan 11 18:10:34.365: INFO: (13) /api/v1/namespaces/proxy-7547/pods/proxy-service-m9cx4-l9qrn:162/proxy/: bar (200; 3.759182ms) Jan 11 18:10:34.365: INFO: (13) /api/v1/namespaces/proxy-7547/pods/http:proxy-service-m9cx4-l9qrn:1080/proxy/: ... 
(200; 4.336706ms) Jan 11 18:10:34.366: INFO: (13) /api/v1/namespaces/proxy-7547/pods/http:proxy-service-m9cx4-l9qrn:162/proxy/: bar (200; 4.779223ms) Jan 11 18:10:34.366: INFO: (13) /api/v1/namespaces/proxy-7547/pods/proxy-service-m9cx4-l9qrn/proxy/: test (200; 5.591688ms) Jan 11 18:10:34.366: INFO: (13) /api/v1/namespaces/proxy-7547/services/http:proxy-service-m9cx4:portname1/proxy/: foo (200; 5.452899ms) Jan 11 18:10:34.367: INFO: (13) /api/v1/namespaces/proxy-7547/services/proxy-service-m9cx4:portname2/proxy/: bar (200; 6.166163ms) Jan 11 18:10:34.368: INFO: (13) /api/v1/namespaces/proxy-7547/services/proxy-service-m9cx4:portname1/proxy/: foo (200; 6.99549ms) Jan 11 18:10:34.369: INFO: (13) /api/v1/namespaces/proxy-7547/pods/https:proxy-service-m9cx4-l9qrn:462/proxy/: tls qux (200; 8.09392ms) Jan 11 18:10:34.369: INFO: (13) /api/v1/namespaces/proxy-7547/services/https:proxy-service-m9cx4:tlsportname2/proxy/: tls qux (200; 7.841079ms) Jan 11 18:10:34.369: INFO: (13) /api/v1/namespaces/proxy-7547/pods/https:proxy-service-m9cx4-l9qrn:443/proxy/: test<... (200; 8.546717ms) Jan 11 18:10:34.370: INFO: (13) /api/v1/namespaces/proxy-7547/services/http:proxy-service-m9cx4:portname2/proxy/: bar (200; 8.572598ms) Jan 11 18:10:34.370: INFO: (13) /api/v1/namespaces/proxy-7547/pods/https:proxy-service-m9cx4-l9qrn:460/proxy/: tls baz (200; 8.648381ms) Jan 11 18:10:34.373: INFO: (14) /api/v1/namespaces/proxy-7547/pods/proxy-service-m9cx4-l9qrn:160/proxy/: foo (200; 3.002536ms) Jan 11 18:10:34.374: INFO: (14) /api/v1/namespaces/proxy-7547/pods/https:proxy-service-m9cx4-l9qrn:443/proxy/: test<... (200; 5.159661ms) Jan 11 18:10:34.376: INFO: (14) /api/v1/namespaces/proxy-7547/pods/http:proxy-service-m9cx4-l9qrn:162/proxy/: bar (200; 5.461045ms) Jan 11 18:10:34.376: INFO: (14) /api/v1/namespaces/proxy-7547/services/http:proxy-service-m9cx4:portname1/proxy/: foo (200; 5.391811ms) Jan 11 18:10:34.376: INFO: (14) /api/v1/namespaces/proxy-7547/pods/proxy-service-m9cx4-l9qrn/proxy/: test (200; 5.557673ms) Jan 11 18:10:34.376: INFO: (14) /api/v1/namespaces/proxy-7547/services/https:proxy-service-m9cx4:tlsportname1/proxy/: tls baz (200; 5.816982ms) Jan 11 18:10:34.377: INFO: (14) /api/v1/namespaces/proxy-7547/services/http:proxy-service-m9cx4:portname2/proxy/: bar (200; 6.558138ms) Jan 11 18:10:34.377: INFO: (14) /api/v1/namespaces/proxy-7547/services/https:proxy-service-m9cx4:tlsportname2/proxy/: tls qux (200; 6.577431ms) Jan 11 18:10:34.377: INFO: (14) /api/v1/namespaces/proxy-7547/services/proxy-service-m9cx4:portname2/proxy/: bar (200; 6.803485ms) Jan 11 18:10:34.377: INFO: (14) /api/v1/namespaces/proxy-7547/services/proxy-service-m9cx4:portname1/proxy/: foo (200; 7.266909ms) Jan 11 18:10:34.378: INFO: (14) /api/v1/namespaces/proxy-7547/pods/http:proxy-service-m9cx4-l9qrn:1080/proxy/: ... (200; 7.214259ms) Jan 11 18:10:34.381: INFO: (15) /api/v1/namespaces/proxy-7547/pods/https:proxy-service-m9cx4-l9qrn:460/proxy/: tls baz (200; 3.410333ms) Jan 11 18:10:34.382: INFO: (15) /api/v1/namespaces/proxy-7547/pods/proxy-service-m9cx4-l9qrn:1080/proxy/: test<... 
(200; 4.197583ms) Jan 11 18:10:34.382: INFO: (15) /api/v1/namespaces/proxy-7547/pods/https:proxy-service-m9cx4-l9qrn:462/proxy/: tls qux (200; 4.711062ms) Jan 11 18:10:34.383: INFO: (15) /api/v1/namespaces/proxy-7547/services/http:proxy-service-m9cx4:portname2/proxy/: bar (200; 5.290498ms) Jan 11 18:10:34.383: INFO: (15) /api/v1/namespaces/proxy-7547/pods/http:proxy-service-m9cx4-l9qrn:162/proxy/: bar (200; 5.273258ms) Jan 11 18:10:34.383: INFO: (15) /api/v1/namespaces/proxy-7547/pods/proxy-service-m9cx4-l9qrn:160/proxy/: foo (200; 5.51692ms) Jan 11 18:10:34.383: INFO: (15) /api/v1/namespaces/proxy-7547/pods/https:proxy-service-m9cx4-l9qrn:443/proxy/: test (200; 5.90131ms) Jan 11 18:10:34.384: INFO: (15) /api/v1/namespaces/proxy-7547/pods/http:proxy-service-m9cx4-l9qrn:1080/proxy/: ... (200; 6.097466ms) Jan 11 18:10:34.384: INFO: (15) /api/v1/namespaces/proxy-7547/pods/proxy-service-m9cx4-l9qrn:162/proxy/: bar (200; 6.312453ms) Jan 11 18:10:34.385: INFO: (15) /api/v1/namespaces/proxy-7547/pods/http:proxy-service-m9cx4-l9qrn:160/proxy/: foo (200; 6.558849ms) Jan 11 18:10:34.385: INFO: (15) /api/v1/namespaces/proxy-7547/services/https:proxy-service-m9cx4:tlsportname1/proxy/: tls baz (200; 6.747407ms) Jan 11 18:10:34.385: INFO: (15) /api/v1/namespaces/proxy-7547/services/proxy-service-m9cx4:portname2/proxy/: bar (200; 6.797509ms) Jan 11 18:10:34.385: INFO: (15) /api/v1/namespaces/proxy-7547/services/proxy-service-m9cx4:portname1/proxy/: foo (200; 7.149397ms) Jan 11 18:10:34.386: INFO: (15) /api/v1/namespaces/proxy-7547/services/http:proxy-service-m9cx4:portname1/proxy/: foo (200; 7.467928ms) Jan 11 18:10:34.389: INFO: (16) /api/v1/namespaces/proxy-7547/pods/https:proxy-service-m9cx4-l9qrn:443/proxy/: ... (200; 3.725576ms) Jan 11 18:10:34.390: INFO: (16) /api/v1/namespaces/proxy-7547/services/http:proxy-service-m9cx4:portname1/proxy/: foo (200; 4.523065ms) Jan 11 18:10:34.390: INFO: (16) /api/v1/namespaces/proxy-7547/services/proxy-service-m9cx4:portname2/proxy/: bar (200; 4.625949ms) Jan 11 18:10:34.392: INFO: (16) /api/v1/namespaces/proxy-7547/pods/proxy-service-m9cx4-l9qrn:1080/proxy/: test<... 
(200; 6.056928ms) Jan 11 18:10:34.393: INFO: (16) /api/v1/namespaces/proxy-7547/pods/proxy-service-m9cx4-l9qrn:162/proxy/: bar (200; 6.87797ms) Jan 11 18:10:34.393: INFO: (16) /api/v1/namespaces/proxy-7547/services/http:proxy-service-m9cx4:portname2/proxy/: bar (200; 7.170495ms) Jan 11 18:10:34.393: INFO: (16) /api/v1/namespaces/proxy-7547/pods/proxy-service-m9cx4-l9qrn:160/proxy/: foo (200; 6.852175ms) Jan 11 18:10:34.393: INFO: (16) /api/v1/namespaces/proxy-7547/pods/http:proxy-service-m9cx4-l9qrn:160/proxy/: foo (200; 7.202837ms) Jan 11 18:10:34.394: INFO: (16) /api/v1/namespaces/proxy-7547/pods/http:proxy-service-m9cx4-l9qrn:162/proxy/: bar (200; 7.617608ms) Jan 11 18:10:34.394: INFO: (16) /api/v1/namespaces/proxy-7547/services/https:proxy-service-m9cx4:tlsportname1/proxy/: tls baz (200; 7.690125ms) Jan 11 18:10:34.394: INFO: (16) /api/v1/namespaces/proxy-7547/services/https:proxy-service-m9cx4:tlsportname2/proxy/: tls qux (200; 7.684496ms) Jan 11 18:10:34.394: INFO: (16) /api/v1/namespaces/proxy-7547/services/proxy-service-m9cx4:portname1/proxy/: foo (200; 8.100602ms) Jan 11 18:10:34.394: INFO: (16) /api/v1/namespaces/proxy-7547/pods/proxy-service-m9cx4-l9qrn/proxy/: test (200; 8.090843ms) Jan 11 18:10:34.394: INFO: (16) /api/v1/namespaces/proxy-7547/pods/https:proxy-service-m9cx4-l9qrn:460/proxy/: tls baz (200; 8.465587ms) Jan 11 18:10:34.398: INFO: (17) /api/v1/namespaces/proxy-7547/pods/proxy-service-m9cx4-l9qrn:160/proxy/: foo (200; 3.292105ms) Jan 11 18:10:34.398: INFO: (17) /api/v1/namespaces/proxy-7547/services/proxy-service-m9cx4:portname2/proxy/: bar (200; 3.7413ms) Jan 11 18:10:34.399: INFO: (17) /api/v1/namespaces/proxy-7547/services/https:proxy-service-m9cx4:tlsportname2/proxy/: tls qux (200; 4.137932ms) Jan 11 18:10:34.399: INFO: (17) /api/v1/namespaces/proxy-7547/pods/proxy-service-m9cx4-l9qrn/proxy/: test (200; 3.987452ms) Jan 11 18:10:34.399: INFO: (17) /api/v1/namespaces/proxy-7547/pods/http:proxy-service-m9cx4-l9qrn:160/proxy/: foo (200; 3.927929ms) Jan 11 18:10:34.399: INFO: (17) /api/v1/namespaces/proxy-7547/pods/proxy-service-m9cx4-l9qrn:1080/proxy/: test<... (200; 4.692396ms) Jan 11 18:10:34.400: INFO: (17) /api/v1/namespaces/proxy-7547/pods/proxy-service-m9cx4-l9qrn:162/proxy/: bar (200; 5.05009ms) Jan 11 18:10:34.400: INFO: (17) /api/v1/namespaces/proxy-7547/services/https:proxy-service-m9cx4:tlsportname1/proxy/: tls baz (200; 5.234282ms) Jan 11 18:10:34.401: INFO: (17) /api/v1/namespaces/proxy-7547/pods/https:proxy-service-m9cx4-l9qrn:460/proxy/: tls baz (200; 5.767424ms) Jan 11 18:10:34.401: INFO: (17) /api/v1/namespaces/proxy-7547/pods/https:proxy-service-m9cx4-l9qrn:462/proxy/: tls qux (200; 6.167582ms) Jan 11 18:10:34.401: INFO: (17) /api/v1/namespaces/proxy-7547/pods/http:proxy-service-m9cx4-l9qrn:1080/proxy/: ... 
(200; 6.368471ms) Jan 11 18:10:34.401: INFO: (17) /api/v1/namespaces/proxy-7547/services/http:proxy-service-m9cx4:portname2/proxy/: bar (200; 6.438332ms) Jan 11 18:10:34.401: INFO: (17) /api/v1/namespaces/proxy-7547/services/http:proxy-service-m9cx4:portname1/proxy/: foo (200; 6.694886ms) Jan 11 18:10:34.401: INFO: (17) /api/v1/namespaces/proxy-7547/pods/http:proxy-service-m9cx4-l9qrn:162/proxy/: bar (200; 6.55188ms) Jan 11 18:10:34.402: INFO: (17) /api/v1/namespaces/proxy-7547/services/proxy-service-m9cx4:portname1/proxy/: foo (200; 7.013823ms) Jan 11 18:10:34.402: INFO: (17) /api/v1/namespaces/proxy-7547/pods/https:proxy-service-m9cx4-l9qrn:443/proxy/: test (200; 4.984933ms) Jan 11 18:10:34.408: INFO: (18) /api/v1/namespaces/proxy-7547/pods/proxy-service-m9cx4-l9qrn:160/proxy/: foo (200; 5.973218ms) Jan 11 18:10:34.408: INFO: (18) /api/v1/namespaces/proxy-7547/pods/https:proxy-service-m9cx4-l9qrn:460/proxy/: tls baz (200; 6.317058ms) Jan 11 18:10:34.409: INFO: (18) /api/v1/namespaces/proxy-7547/services/https:proxy-service-m9cx4:tlsportname2/proxy/: tls qux (200; 6.780115ms) Jan 11 18:10:34.409: INFO: (18) /api/v1/namespaces/proxy-7547/pods/http:proxy-service-m9cx4-l9qrn:1080/proxy/: ... (200; 6.608359ms) Jan 11 18:10:34.409: INFO: (18) /api/v1/namespaces/proxy-7547/pods/http:proxy-service-m9cx4-l9qrn:160/proxy/: foo (200; 6.954084ms) Jan 11 18:10:34.409: INFO: (18) /api/v1/namespaces/proxy-7547/pods/https:proxy-service-m9cx4-l9qrn:462/proxy/: tls qux (200; 6.781142ms) Jan 11 18:10:34.409: INFO: (18) /api/v1/namespaces/proxy-7547/services/proxy-service-m9cx4:portname1/proxy/: foo (200; 7.121314ms) Jan 11 18:10:34.409: INFO: (18) /api/v1/namespaces/proxy-7547/pods/http:proxy-service-m9cx4-l9qrn:162/proxy/: bar (200; 7.34436ms) Jan 11 18:10:34.409: INFO: (18) /api/v1/namespaces/proxy-7547/pods/https:proxy-service-m9cx4-l9qrn:443/proxy/: test<... (200; 7.933264ms) Jan 11 18:10:34.410: INFO: (18) /api/v1/namespaces/proxy-7547/services/proxy-service-m9cx4:portname2/proxy/: bar (200; 8.121036ms) Jan 11 18:10:34.410: INFO: (18) /api/v1/namespaces/proxy-7547/services/http:proxy-service-m9cx4:portname1/proxy/: foo (200; 8.187465ms) Jan 11 18:10:34.414: INFO: (19) /api/v1/namespaces/proxy-7547/pods/http:proxy-service-m9cx4-l9qrn:162/proxy/: bar (200; 3.808037ms) Jan 11 18:10:34.416: INFO: (19) /api/v1/namespaces/proxy-7547/services/http:proxy-service-m9cx4:portname1/proxy/: foo (200; 5.111654ms) Jan 11 18:10:34.416: INFO: (19) /api/v1/namespaces/proxy-7547/pods/https:proxy-service-m9cx4-l9qrn:462/proxy/: tls qux (200; 5.188469ms) Jan 11 18:10:34.416: INFO: (19) /api/v1/namespaces/proxy-7547/pods/proxy-service-m9cx4-l9qrn/proxy/: test (200; 5.352517ms) Jan 11 18:10:34.416: INFO: (19) /api/v1/namespaces/proxy-7547/pods/http:proxy-service-m9cx4-l9qrn:1080/proxy/: ... (200; 5.575953ms) Jan 11 18:10:34.416: INFO: (19) /api/v1/namespaces/proxy-7547/pods/proxy-service-m9cx4-l9qrn:1080/proxy/: test<... 
(200; 5.337969ms) Jan 11 18:10:34.416: INFO: (19) /api/v1/namespaces/proxy-7547/pods/proxy-service-m9cx4-l9qrn:160/proxy/: foo (200; 5.786831ms) Jan 11 18:10:34.417: INFO: (19) /api/v1/namespaces/proxy-7547/pods/https:proxy-service-m9cx4-l9qrn:443/proxy/: >> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [AfterEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 18:10:44.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-5044" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":234,"skipped":4090,"failed":0} SSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 18:10:44.098: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 11 18:10:44.169: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR Jan 11 18:10:44.767: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-01-11T18:10:44Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-01-11T18:10:44Z]] name:name1 resourceVersion:214087 uid:84336f0d-aeef-45d9-bee2-541454638a8e] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR Jan 11 18:10:54.779: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-01-11T18:10:54Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-01-11T18:10:54Z]] name:name2 resourceVersion:214134 uid:dd86c251-c168-4348-869d-5bf16f1c38d0] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR Jan 11 18:11:04.794: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test 
kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-01-11T18:10:44Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-01-11T18:11:04Z]] name:name1 resourceVersion:214154 uid:84336f0d-aeef-45d9-bee2-541454638a8e] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR Jan 11 18:11:14.807: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-01-11T18:10:54Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-01-11T18:11:14Z]] name:name2 resourceVersion:214174 uid:dd86c251-c168-4348-869d-5bf16f1c38d0] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR Jan 11 18:11:24.821: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-01-11T18:10:44Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-01-11T18:11:04Z]] name:name1 resourceVersion:214195 uid:84336f0d-aeef-45d9-bee2-541454638a8e] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR Jan 11 18:11:34.835: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-01-11T18:10:54Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-01-11T18:11:14Z]] name:name2 resourceVersion:214220 uid:dd86c251-c168-4348-869d-5bf16f1c38d0] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 18:11:45.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-8385" for this suite. 
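[Editor's note: the "Got : ADDED / MODIFIED / DELETED" events above are the conformance test watching its custom resource through a dynamic client. As a rough illustration only, the sketch below shows how a comparable watch on the same group/version could be opened with client-go's dynamic client. The resource plural ("noxus"), the namespace, and the error handling are assumptions not confirmed by this log; the kubeconfig path is the one printed earlier in the run. This is not the suite's actual implementation.]

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path taken from the log; adjust for a local cluster.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Group/version match the CRD exercised above (kind WishIHadChosenNoxu).
	// The plural "noxus" is an assumption; the log does not show it.
	gvr := schema.GroupVersionResource{
		Group:    "mygroup.example.com",
		Version:  "v1beta1",
		Resource: "noxus",
	}

	w, err := client.Resource(gvr).Namespace("default").Watch(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	// Each create, update, and delete of a custom resource arrives as an
	// event, mirroring the ADDED/MODIFIED/DELETED "Got :" lines in the log.
	for ev := range w.ResultChan() {
		fmt.Printf("Got : %s %v\n", ev.Type, ev.Object)
	}
}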
• [SLOW TEST:61.300 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42 watch on custom resource definition objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":309,"completed":235,"skipped":4095,"failed":0} SSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 18:11:45.399: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 11 18:11:56.775: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 11 18:11:58.795: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745985516, loc:(*time.Location)(0x5f133f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745985516, loc:(*time.Location)(0x5f133f0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745985516, loc:(*time.Location)(0x5f133f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745985516, loc:(*time.Location)(0x5f133f0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 11 18:12:01.865: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 11 18:12:01.871: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied 
by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 18:12:03.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9484" for this suite. STEP: Destroying namespace "webhook-9484-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:17.837 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":309,"completed":236,"skipped":4099,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 18:12:03.238: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test emptydir 0777 on node default medium Jan 11 18:12:03.331: INFO: Waiting up to 5m0s for pod "pod-ee82b39a-3110-4f89-9f5a-8c042c2fd77a" in namespace "emptydir-2873" to be "Succeeded or Failed" Jan 11 18:12:03.398: INFO: Pod "pod-ee82b39a-3110-4f89-9f5a-8c042c2fd77a": Phase="Pending", Reason="", readiness=false. Elapsed: 66.140537ms Jan 11 18:12:05.487: INFO: Pod "pod-ee82b39a-3110-4f89-9f5a-8c042c2fd77a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.154859502s Jan 11 18:12:08.171: INFO: Pod "pod-ee82b39a-3110-4f89-9f5a-8c042c2fd77a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.83887002s STEP: Saw pod success Jan 11 18:12:08.171: INFO: Pod "pod-ee82b39a-3110-4f89-9f5a-8c042c2fd77a" satisfied condition "Succeeded or Failed" Jan 11 18:12:08.178: INFO: Trying to get logs from node leguer-worker2 pod pod-ee82b39a-3110-4f89-9f5a-8c042c2fd77a container test-container: STEP: delete the pod Jan 11 18:12:08.831: INFO: Waiting for pod pod-ee82b39a-3110-4f89-9f5a-8c042c2fd77a to disappear Jan 11 18:12:08.839: INFO: Pod pod-ee82b39a-3110-4f89-9f5a-8c042c2fd77a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 18:12:08.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2873" for this suite. • [SLOW TEST:5.645 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:45 should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":237,"skipped":4121,"failed":0} SSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 18:12:08.885: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 Jan 11 18:12:09.104: INFO: Waiting up to 1m0s for all nodes to be ready Jan 11 18:13:09.199: INFO: Waiting for terminating namespaces to be deleted... [BeforeEach] PriorityClass endpoints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 18:13:09.204: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption-path STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] PriorityClass endpoints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:679 [It] verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 11 18:13:09.339: INFO: PriorityClass.scheduling.k8s.io "p1" is invalid: Value: Forbidden: may not be changed in an update. Jan 11 18:13:09.346: INFO: PriorityClass.scheduling.k8s.io "p2" is invalid: Value: Forbidden: may not be changed in an update. 
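[Editor's note: the two "Forbidden: may not be changed in an update" messages above show the API server rejecting any change to a PriorityClass's value field. A minimal sketch of exercising that behaviour with a typed client-go clientset follows; the priority values are illustrative assumptions, while the class name "p1" and the kubeconfig path are taken from the log. The suite's own test code is not reproduced here.]

package main

import (
	"context"
	"fmt"

	schedulingv1 "k8s.io/api/scheduling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()

	// Create a PriorityClass; the value 1000 is an arbitrary example.
	pc := &schedulingv1.PriorityClass{
		ObjectMeta: metav1.ObjectMeta{Name: "p1"},
		Value:      1000,
	}
	created, err := cs.SchedulingV1().PriorityClasses().Create(ctx, pc, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}

	// Changing Value on update is rejected by the API server, which is what
	// the "Value: Forbidden: may not be changed in an update" lines record.
	created.Value = 2000
	if _, err := cs.SchedulingV1().PriorityClasses().Update(ctx, created, metav1.UpdateOptions{}); err != nil {
		fmt.Println("expected rejection:", err)
	}

	// Clean up the example object.
	_ = cs.SchedulingV1().PriorityClasses().Delete(ctx, "p1", metav1.DeleteOptions{})
}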
[AfterEach] PriorityClass endpoints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 18:13:09.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-path-7938" for this suite. [AfterEach] PriorityClass endpoints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:693 [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 18:13:09.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-9826" for this suite. [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 • [SLOW TEST:60.663 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PriorityClass endpoints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:673 verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]","total":309,"completed":238,"skipped":4130,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 18:13:09.550: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:85 [It] deployment should support rollover [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 11 18:13:09.625: INFO: Pod name rollover-pod: Found 0 pods out of 1 Jan 11 18:13:14.631: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jan 11 18:13:14.632: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Jan 11 18:13:16.642: INFO: Creating deployment "test-rollover-deployment" Jan 11 18:13:16.669: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Jan 11 18:13:18.682: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Jan 11 18:13:18.695: INFO: Ensure that both replica sets have 1 created replica Jan 11 18:13:18.706: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Jan 11 18:13:18.719: INFO: Updating deployment test-rollover-deployment Jan 11 18:13:18.720: INFO: Wait deployment 
"test-rollover-deployment" to be observed by the deployment controller Jan 11 18:13:20.754: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Jan 11 18:13:20.766: INFO: Make sure deployment "test-rollover-deployment" is complete Jan 11 18:13:20.777: INFO: all replica sets need to contain the pod-template-hash label Jan 11 18:13:20.777: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745985596, loc:(*time.Location)(0x5f133f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745985596, loc:(*time.Location)(0x5f133f0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745985598, loc:(*time.Location)(0x5f133f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745985596, loc:(*time.Location)(0x5f133f0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-668db69979\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 11 18:13:22.794: INFO: all replica sets need to contain the pod-template-hash label Jan 11 18:13:22.794: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745985596, loc:(*time.Location)(0x5f133f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745985596, loc:(*time.Location)(0x5f133f0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745985602, loc:(*time.Location)(0x5f133f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745985596, loc:(*time.Location)(0x5f133f0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-668db69979\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 11 18:13:24.793: INFO: all replica sets need to contain the pod-template-hash label Jan 11 18:13:24.793: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745985596, loc:(*time.Location)(0x5f133f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745985596, loc:(*time.Location)(0x5f133f0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745985602, loc:(*time.Location)(0x5f133f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745985596, loc:(*time.Location)(0x5f133f0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-668db69979\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 11 18:13:26.795: INFO: all replica sets need to contain the pod-template-hash label Jan 11 18:13:26.795: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745985596, loc:(*time.Location)(0x5f133f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745985596, loc:(*time.Location)(0x5f133f0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745985602, loc:(*time.Location)(0x5f133f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745985596, loc:(*time.Location)(0x5f133f0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-668db69979\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 11 18:13:28.793: INFO: all replica sets need to contain the pod-template-hash label Jan 11 18:13:28.794: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745985596, loc:(*time.Location)(0x5f133f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745985596, loc:(*time.Location)(0x5f133f0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745985602, loc:(*time.Location)(0x5f133f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745985596, loc:(*time.Location)(0x5f133f0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-668db69979\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 11 18:13:30.793: INFO: all replica sets need to contain the pod-template-hash label Jan 11 18:13:30.793: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745985596, loc:(*time.Location)(0x5f133f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745985596, loc:(*time.Location)(0x5f133f0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745985602, loc:(*time.Location)(0x5f133f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745985596, loc:(*time.Location)(0x5f133f0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-668db69979\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 11 18:13:32.792: INFO: Jan 11 18:13:32.792: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:79 Jan 11 18:13:32.809: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-5119 04ba48d2-aec0-4646-9c74-372a6a0708f7 214671 2 2021-01-11 18:13:16 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update 
apps/v1 2021-01-11 18:13:18 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:minReadySeconds":{},"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-01-11 18:13:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.21 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xb7b5ae8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2021-01-11 18:13:16 +0000 UTC,LastTransitionTime:2021-01-11 18:13:16 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-668db69979" has successfully progressed.,LastUpdateTime:2021-01-11 18:13:32 +0000 UTC,LastTransitionTime:2021-01-11 18:13:16 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Jan 11 18:13:32.819: INFO: New ReplicaSet "test-rollover-deployment-668db69979" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-668db69979 deployment-5119 65a2b954-83f3-471f-bd0f-399722a4a5ac 214660 2 2021-01-11 18:13:18 +0000 UTC map[name:rollover-pod pod-template-hash:668db69979] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 
deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 04ba48d2-aec0-4646-9c74-372a6a0708f7 0xb7b5ef7 0xb7b5ef8}] [] [{kube-controller-manager Update apps/v1 2021-01-11 18:13:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"04ba48d2-aec0-4646-9c74-372a6a0708f7\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 668db69979,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:668db69979] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.21 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xb7b5f88 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jan 11 18:13:32.819: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Jan 11 18:13:32.820: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-5119 bdbe6b1d-1ae2-4dd1-86b3-cc242479ec59 214670 2 2021-01-11 18:13:09 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 04ba48d2-aec0-4646-9c74-372a6a0708f7 0xb7b5de7 0xb7b5de8}] [] [{e2e.test Update apps/v1 2021-01-11 18:13:09 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-01-11 18:13:32 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"04ba48d2-aec0-4646-9c74-372a6a0708f7\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xb7b5e88 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 11 18:13:32.821: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-78bc8b888c deployment-5119 9543981a-0e41-4fb6-8f18-88179b0caf54 214624 2 2021-01-11 18:13:16 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 04ba48d2-aec0-4646-9c74-372a6a0708f7 0xb7b5ff7 0xb7b5ff8}] [] [{kube-controller-manager Update apps/v1 2021-01-11 18:13:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"04ba48d2-aec0-4646-9c74-372a6a0708f7\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"redis-slave\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 78bc8b888c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 
0xaedc088 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 11 18:13:32.830: INFO: Pod "test-rollover-deployment-668db69979-8prrm" is available: &Pod{ObjectMeta:{test-rollover-deployment-668db69979-8prrm test-rollover-deployment-668db69979- deployment-5119 1f1728b0-611d-4c51-86ad-34bfc76cbab8 214638 0 2021-01-11 18:13:18 +0000 UTC map[name:rollover-pod pod-template-hash:668db69979] map[] [{apps/v1 ReplicaSet test-rollover-deployment-668db69979 65a2b954-83f3-471f-bd0f-399722a4a5ac 0xaedc527 0xaedc528}] [] [{kube-controller-manager Update v1 2021-01-11 18:13:18 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65a2b954-83f3-471f-bd0f-399722a4a5ac\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-11 18:13:22 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.117\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-52x7p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-52x7p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.21,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-52x7p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAs
Group:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 18:13:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 18:13:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 18:13:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 18:13:18 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.2.117,StartTime:2021-01-11 18:13:18 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-01-11 18:13:21 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.21,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:ab055cd3d45f50b90732c14593a5bf50f210871bb4f91994c756fc22db6d922a,ContainerID:containerd://32430366c9cbe9c490dd2eba8909cf7d0abd3a7e4fb9dc3347e17b27966d9250,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.117,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 18:13:32.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5119" for this suite. 
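The rollover above works because the Deployment is created with a pod template that differs from the pre-existing pods matched by its selector, and its RollingUpdate settings (maxUnavailable: 0, maxSurge: 1, minReadySeconds: 10) force the controller to bring a new pod up and keep it Ready for 10 seconds before scaling the old ReplicaSet to zero. A minimal, hand-runnable sketch of the same shape (resource names are illustrative, not taken from this run):

cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rollover-demo          # illustrative name
spec:
  replicas: 1
  minReadySeconds: 10          # new pod must stay Ready this long before old pods are retired
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0        # never drop below the desired replica count
      maxSurge: 1              # allow one extra pod during the switch
  selector:
    matchLabels:
      name: rollover-pod
  template:
    metadata:
      labels:
        name: rollover-pod
    spec:
      containers:
      - name: agnhost
        image: k8s.gcr.io/e2e-test-images/agnhost:2.21
EOF
kubectl rollout status deployment/rollover-demo
kubectl get replicasets -l name=rollover-pod    # new RS scales up while the old one scales to 0

With maxUnavailable: 0 the available count never dips below spec.replicas during the switch, which matches the DeploymentStatus dumps above.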
• [SLOW TEST:23.293 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":309,"completed":239,"skipped":4156,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 18:13:32.846: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9546.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9546.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 11 18:13:39.219: INFO: DNS probes using dns-9546/dns-test-19ac752b-55de-435f-9d02-77537b46297d succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 18:13:39.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9546" for this suite. 
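The probe scripts above loop dig against kubernetes.default.svc.cluster.local over both UDP and TCP and write OK markers into /results. The same lookups can be run by hand from any pod that ships dig; a sketch, with an assumed utility image and illustrative pod name:

kubectl run dns-check --image=registry.k8s.io/e2e-test-images/dnsutils:1.3 \
  --restart=Never --command -- sleep 3600        # image is an assumption; anything with dig works
kubectl exec dns-check -- dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A   # UDP
kubectl exec dns-check -- dig +tcp   +noall +answer +search kubernetes.default.svc.cluster.local A   # same query over TCP
kubectl delete pod dns-check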
• [SLOW TEST:6.616 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":309,"completed":240,"skipped":4170,"failed":0} SSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 18:13:39.463: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:129 [It] should retry creating failed daemon pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Jan 11 18:13:39.714: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 18:13:39.751: INFO: Number of nodes with available pods: 0 Jan 11 18:13:39.751: INFO: Node leguer-worker is running more than one daemon pod Jan 11 18:13:40.764: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 18:13:40.771: INFO: Number of nodes with available pods: 0 Jan 11 18:13:40.771: INFO: Node leguer-worker is running more than one daemon pod Jan 11 18:13:41.765: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 18:13:41.771: INFO: Number of nodes with available pods: 0 Jan 11 18:13:41.771: INFO: Node leguer-worker is running more than one daemon pod Jan 11 18:13:42.763: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 18:13:42.770: INFO: Number of nodes with available pods: 0 Jan 11 18:13:42.770: INFO: Node leguer-worker is running more than one daemon pod Jan 11 18:13:43.762: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 18:13:43.768: INFO: Number of nodes with available pods: 1 Jan 11 18:13:43.768: INFO: Node leguer-worker is running more than one daemon pod Jan 11 18:13:44.764: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 18:13:44.788: INFO: Number of nodes with available pods: 2 Jan 11 18:13:44.788: 
INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. Jan 11 18:13:44.824: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 18:13:44.837: INFO: Number of nodes with available pods: 1 Jan 11 18:13:44.838: INFO: Node leguer-worker is running more than one daemon pod Jan 11 18:13:45.870: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 18:13:45.877: INFO: Number of nodes with available pods: 1 Jan 11 18:13:45.877: INFO: Node leguer-worker is running more than one daemon pod Jan 11 18:13:46.849: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 18:13:46.856: INFO: Number of nodes with available pods: 1 Jan 11 18:13:46.856: INFO: Node leguer-worker is running more than one daemon pod Jan 11 18:13:47.849: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 18:13:47.872: INFO: Number of nodes with available pods: 1 Jan 11 18:13:47.872: INFO: Node leguer-worker is running more than one daemon pod Jan 11 18:13:48.849: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 11 18:13:48.855: INFO: Number of nodes with available pods: 2 Jan 11 18:13:48.855: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:95 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8563, will wait for the garbage collector to delete the pods Jan 11 18:13:48.929: INFO: Deleting DaemonSet.extensions daemon-set took: 10.076769ms Jan 11 18:13:49.530: INFO: Terminating DaemonSet.extensions daemon-set pods took: 601.044292ms Jan 11 18:14:09.835: INFO: Number of nodes with available pods: 0 Jan 11 18:14:09.835: INFO: Number of running nodes: 0, number of available pods: 0 Jan 11 18:14:09.840: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"214885"},"items":null} Jan 11 18:14:09.843: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"214885"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 18:14:09.861: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-8563" for this suite. 
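The retry behaviour above comes from the DaemonSet controller reconciling one pod per schedulable node: when the test forces a daemon pod into the Failed phase, the controller deletes it and creates a replacement, which is why the available count drops to 1 and climbs back to 2. A sketch of the same reconciliation observed by deleting a daemon pod by hand (the e2e test sets the pod phase to Failed through the API instead; names are illustrative):

cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-demo            # illustrative name
spec:
  selector:
    matchLabels:
      app: daemon-demo
  template:
    metadata:
      labels:
        app: daemon-demo
    spec:
      containers:
      - name: app
        image: k8s.gcr.io/e2e-test-images/agnhost:2.21
EOF
# Removing (or failing) a daemon pod triggers the same reconciliation the test checks:
kubectl delete pod -l app=daemon-demo --wait=false
kubectl get pods -l app=daemon-demo -o wide -w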
• [SLOW TEST:30.433 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":309,"completed":241,"skipped":4181,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 18:14:09.899: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test emptydir 0666 on node default medium Jan 11 18:14:09.986: INFO: Waiting up to 5m0s for pod "pod-9f79e800-b219-4131-995c-6abeb38c490a" in namespace "emptydir-6080" to be "Succeeded or Failed" Jan 11 18:14:10.021: INFO: Pod "pod-9f79e800-b219-4131-995c-6abeb38c490a": Phase="Pending", Reason="", readiness=false. Elapsed: 35.212724ms Jan 11 18:14:12.030: INFO: Pod "pod-9f79e800-b219-4131-995c-6abeb38c490a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044072496s Jan 11 18:14:14.038: INFO: Pod "pod-9f79e800-b219-4131-995c-6abeb38c490a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.051995964s STEP: Saw pod success Jan 11 18:14:14.038: INFO: Pod "pod-9f79e800-b219-4131-995c-6abeb38c490a" satisfied condition "Succeeded or Failed" Jan 11 18:14:14.044: INFO: Trying to get logs from node leguer-worker2 pod pod-9f79e800-b219-4131-995c-6abeb38c490a container test-container: STEP: delete the pod Jan 11 18:14:14.103: INFO: Waiting for pod pod-9f79e800-b219-4131-995c-6abeb38c490a to disappear Jan 11 18:14:14.131: INFO: Pod pod-9f79e800-b219-4131-995c-6abeb38c490a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 18:14:14.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6080" for this suite. 
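The emptyDir test above schedules a pod whose container writes a file with the requested 0666 mode into an emptyDir volume on the default medium and asserts on the container log before the pod reaches Succeeded. A hand-runnable sketch of the same idea (pod name and busybox image are illustrative):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo          # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.33
    command: ["sh", "-c", "ls -ld /cache && touch /cache/f && chmod 0666 /cache/f && ls -l /cache/f"]
    volumeMounts:
    - name: cache
      mountPath: /cache
  volumes:
  - name: cache
    emptyDir: {}               # default medium (node disk); medium: Memory would use tmpfs
EOF
kubectl logs emptydir-demo     # once the pod has Succeeded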
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":242,"skipped":4204,"failed":0} SS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 18:14:14.149: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should be able to create a functioning NodePort service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating service nodeport-test with type=NodePort in namespace services-6215 STEP: creating replication controller nodeport-test in namespace services-6215 I0111 18:14:14.408158 10 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-6215, replica count: 2 I0111 18:14:17.459813 10 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0111 18:14:20.460543 10 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 11 18:14:20.460: INFO: Creating new exec pod Jan 11 18:14:25.503: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-6215 exec execpodwcqzh -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' Jan 11 18:14:29.979: INFO: stderr: "I0111 18:14:29.852435 4603 log.go:181] (0x2799030) (0x2799500) Create stream\nI0111 18:14:29.855168 4603 log.go:181] (0x2799030) (0x2799500) Stream added, broadcasting: 1\nI0111 18:14:29.869805 4603 log.go:181] (0x2799030) Reply frame received for 1\nI0111 18:14:29.870351 4603 log.go:181] (0x2799030) (0x289a070) Create stream\nI0111 18:14:29.870422 4603 log.go:181] (0x2799030) (0x289a070) Stream added, broadcasting: 3\nI0111 18:14:29.871603 4603 log.go:181] (0x2799030) Reply frame received for 3\nI0111 18:14:29.871902 4603 log.go:181] (0x2799030) (0x2799d50) Create stream\nI0111 18:14:29.871990 4603 log.go:181] (0x2799030) (0x2799d50) Stream added, broadcasting: 5\nI0111 18:14:29.873287 4603 log.go:181] (0x2799030) Reply frame received for 5\nI0111 18:14:29.958704 4603 log.go:181] (0x2799030) Data frame received for 5\nI0111 18:14:29.959110 4603 log.go:181] (0x2799d50) (5) Data frame handling\nI0111 18:14:29.959533 4603 log.go:181] (0x2799030) Data frame received for 3\nI0111 18:14:29.959659 4603 log.go:181] (0x289a070) (3) Data frame handling\nI0111 18:14:29.960683 4603 log.go:181] (0x2799030) Data frame received for 1\nI0111 18:14:29.960957 4603 log.go:181] (0x2799500) (1) Data frame handling\nI0111 18:14:29.961137 4603 log.go:181] (0x2799d50) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0111 18:14:29.961481 4603 log.go:181] (0x2799500) (1) Data frame sent\nI0111 18:14:29.961982 4603 log.go:181] (0x2799030) Data frame received 
for 5\nI0111 18:14:29.962179 4603 log.go:181] (0x2799d50) (5) Data frame handling\nI0111 18:14:29.962398 4603 log.go:181] (0x2799d50) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0111 18:14:29.962584 4603 log.go:181] (0x2799030) Data frame received for 5\nI0111 18:14:29.962760 4603 log.go:181] (0x2799d50) (5) Data frame handling\nI0111 18:14:29.963383 4603 log.go:181] (0x2799030) (0x2799500) Stream removed, broadcasting: 1\nI0111 18:14:29.965921 4603 log.go:181] (0x2799030) Go away received\nI0111 18:14:29.968357 4603 log.go:181] (0x2799030) (0x2799500) Stream removed, broadcasting: 1\nI0111 18:14:29.968767 4603 log.go:181] (0x2799030) (0x289a070) Stream removed, broadcasting: 3\nI0111 18:14:29.969485 4603 log.go:181] (0x2799030) (0x2799d50) Stream removed, broadcasting: 5\n" Jan 11 18:14:29.980: INFO: stdout: "" Jan 11 18:14:29.987: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-6215 exec execpodwcqzh -- /bin/sh -x -c nc -zv -t -w 2 10.96.51.61 80' Jan 11 18:14:31.464: INFO: stderr: "I0111 18:14:31.323469 4624 log.go:181] (0x2712fc0) (0x2713030) Create stream\nI0111 18:14:31.327441 4624 log.go:181] (0x2712fc0) (0x2713030) Stream added, broadcasting: 1\nI0111 18:14:31.345256 4624 log.go:181] (0x2712fc0) Reply frame received for 1\nI0111 18:14:31.345793 4624 log.go:181] (0x2712fc0) (0x265c0e0) Create stream\nI0111 18:14:31.345875 4624 log.go:181] (0x2712fc0) (0x265c0e0) Stream added, broadcasting: 3\nI0111 18:14:31.347484 4624 log.go:181] (0x2712fc0) Reply frame received for 3\nI0111 18:14:31.347860 4624 log.go:181] (0x2712fc0) (0x27120e0) Create stream\nI0111 18:14:31.347964 4624 log.go:181] (0x2712fc0) (0x27120e0) Stream added, broadcasting: 5\nI0111 18:14:31.349243 4624 log.go:181] (0x2712fc0) Reply frame received for 5\nI0111 18:14:31.447601 4624 log.go:181] (0x2712fc0) Data frame received for 5\nI0111 18:14:31.447936 4624 log.go:181] (0x2712fc0) Data frame received for 3\nI0111 18:14:31.448164 4624 log.go:181] (0x265c0e0) (3) Data frame handling\nI0111 18:14:31.448416 4624 log.go:181] (0x27120e0) (5) Data frame handling\nI0111 18:14:31.448776 4624 log.go:181] (0x2712fc0) Data frame received for 1\nI0111 18:14:31.449060 4624 log.go:181] (0x2713030) (1) Data frame handling\n+ nc -zv -t -w 2 10.96.51.61 80\nConnection to 10.96.51.61 80 port [tcp/http] succeeded!\nI0111 18:14:31.451350 4624 log.go:181] (0x2713030) (1) Data frame sent\nI0111 18:14:31.451555 4624 log.go:181] (0x27120e0) (5) Data frame sent\nI0111 18:14:31.451980 4624 log.go:181] (0x2712fc0) Data frame received for 5\nI0111 18:14:31.452055 4624 log.go:181] (0x27120e0) (5) Data frame handling\nI0111 18:14:31.453249 4624 log.go:181] (0x2712fc0) (0x2713030) Stream removed, broadcasting: 1\nI0111 18:14:31.453702 4624 log.go:181] (0x2712fc0) Go away received\nI0111 18:14:31.456280 4624 log.go:181] (0x2712fc0) (0x2713030) Stream removed, broadcasting: 1\nI0111 18:14:31.456483 4624 log.go:181] (0x2712fc0) (0x265c0e0) Stream removed, broadcasting: 3\nI0111 18:14:31.456659 4624 log.go:181] (0x2712fc0) (0x27120e0) Stream removed, broadcasting: 5\n" Jan 11 18:14:31.465: INFO: stdout: "" Jan 11 18:14:31.466: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-6215 exec execpodwcqzh -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.13 30863' Jan 11 18:14:33.022: INFO: stderr: "I0111 18:14:32.924316 4644 log.go:181] (0x28627e0) (0x2863030) Create stream\nI0111 
18:14:32.927943 4644 log.go:181] (0x28627e0) (0x2863030) Stream added, broadcasting: 1\nI0111 18:14:32.947739 4644 log.go:181] (0x28627e0) Reply frame received for 1\nI0111 18:14:32.948286 4644 log.go:181] (0x28627e0) (0x2948690) Create stream\nI0111 18:14:32.948367 4644 log.go:181] (0x28627e0) (0x2948690) Stream added, broadcasting: 3\nI0111 18:14:32.949859 4644 log.go:181] (0x28627e0) Reply frame received for 3\nI0111 18:14:32.950115 4644 log.go:181] (0x28627e0) (0x2948a10) Create stream\nI0111 18:14:32.950181 4644 log.go:181] (0x28627e0) (0x2948a10) Stream added, broadcasting: 5\nI0111 18:14:32.951232 4644 log.go:181] (0x28627e0) Reply frame received for 5\nI0111 18:14:33.005180 4644 log.go:181] (0x28627e0) Data frame received for 5\nI0111 18:14:33.005537 4644 log.go:181] (0x2948a10) (5) Data frame handling\nI0111 18:14:33.005672 4644 log.go:181] (0x28627e0) Data frame received for 3\nI0111 18:14:33.005826 4644 log.go:181] (0x28627e0) Data frame received for 1\nI0111 18:14:33.005975 4644 log.go:181] (0x2863030) (1) Data frame handling\nI0111 18:14:33.006071 4644 log.go:181] (0x2948690) (3) Data frame handling\nI0111 18:14:33.007059 4644 log.go:181] (0x2948a10) (5) Data frame sent\nI0111 18:14:33.007445 4644 log.go:181] (0x2863030) (1) Data frame sent\n+ nc -zv -t -w 2 172.18.0.13 30863\nConnection to 172.18.0.13 30863 port [tcp/30863] succeeded!\nI0111 18:14:33.008636 4644 log.go:181] (0x28627e0) Data frame received for 5\nI0111 18:14:33.008806 4644 log.go:181] (0x2948a10) (5) Data frame handling\nI0111 18:14:33.010517 4644 log.go:181] (0x28627e0) (0x2863030) Stream removed, broadcasting: 1\nI0111 18:14:33.011800 4644 log.go:181] (0x28627e0) Go away received\nI0111 18:14:33.013919 4644 log.go:181] (0x28627e0) (0x2863030) Stream removed, broadcasting: 1\nI0111 18:14:33.014259 4644 log.go:181] (0x28627e0) (0x2948690) Stream removed, broadcasting: 3\nI0111 18:14:33.014494 4644 log.go:181] (0x28627e0) (0x2948a10) Stream removed, broadcasting: 5\n" Jan 11 18:14:33.023: INFO: stdout: "" Jan 11 18:14:33.023: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-6215 exec execpodwcqzh -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.12 30863' Jan 11 18:14:34.499: INFO: stderr: "I0111 18:14:34.382162 4664 log.go:181] (0x2842690) (0x2842770) Create stream\nI0111 18:14:34.384409 4664 log.go:181] (0x2842690) (0x2842770) Stream added, broadcasting: 1\nI0111 18:14:34.402126 4664 log.go:181] (0x2842690) Reply frame received for 1\nI0111 18:14:34.402630 4664 log.go:181] (0x2842690) (0x2904070) Create stream\nI0111 18:14:34.402690 4664 log.go:181] (0x2842690) (0x2904070) Stream added, broadcasting: 3\nI0111 18:14:34.403999 4664 log.go:181] (0x2842690) Reply frame received for 3\nI0111 18:14:34.404230 4664 log.go:181] (0x2842690) (0x2806070) Create stream\nI0111 18:14:34.404286 4664 log.go:181] (0x2842690) (0x2806070) Stream added, broadcasting: 5\nI0111 18:14:34.405396 4664 log.go:181] (0x2842690) Reply frame received for 5\nI0111 18:14:34.478776 4664 log.go:181] (0x2842690) Data frame received for 3\nI0111 18:14:34.479082 4664 log.go:181] (0x2842690) Data frame received for 5\nI0111 18:14:34.479403 4664 log.go:181] (0x2806070) (5) Data frame handling\nI0111 18:14:34.479622 4664 log.go:181] (0x2904070) (3) Data frame handling\nI0111 18:14:34.479988 4664 log.go:181] (0x2842690) Data frame received for 1\nI0111 18:14:34.480148 4664 log.go:181] (0x2842770) (1) Data frame handling\nI0111 18:14:34.481285 4664 log.go:181] (0x2806070) (5) Data 
frame sent\n+ nc -zv -t -w 2 172.18.0.12 30863\nConnection to 172.18.0.12 30863 port [tcp/30863] succeeded!\nI0111 18:14:34.481926 4664 log.go:181] (0x2842770) (1) Data frame sent\nI0111 18:14:34.482346 4664 log.go:181] (0x2842690) Data frame received for 5\nI0111 18:14:34.482495 4664 log.go:181] (0x2806070) (5) Data frame handling\nI0111 18:14:34.483961 4664 log.go:181] (0x2842690) (0x2842770) Stream removed, broadcasting: 1\nI0111 18:14:34.486283 4664 log.go:181] (0x2842690) Go away received\nI0111 18:14:34.491187 4664 log.go:181] (0x2842690) (0x2842770) Stream removed, broadcasting: 1\nI0111 18:14:34.491522 4664 log.go:181] (0x2842690) (0x2904070) Stream removed, broadcasting: 3\nI0111 18:14:34.491788 4664 log.go:181] (0x2842690) (0x2806070) Stream removed, broadcasting: 5\n" Jan 11 18:14:34.499: INFO: stdout: "" [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 18:14:34.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6215" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 • [SLOW TEST:20.366 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":309,"completed":243,"skipped":4206,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 18:14:34.520: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 11 18:14:34.618: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-f9e35989-e47b-407f-97b4-a068ced0bbf9" in namespace "security-context-test-4021" to be "Succeeded or Failed" Jan 11 18:14:34.629: INFO: Pod "busybox-readonly-false-f9e35989-e47b-407f-97b4-a068ced0bbf9": Phase="Pending", Reason="", readiness=false. Elapsed: 10.898676ms Jan 11 18:14:36.638: INFO: Pod "busybox-readonly-false-f9e35989-e47b-407f-97b4-a068ced0bbf9": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.019780341s Jan 11 18:14:38.646: INFO: Pod "busybox-readonly-false-f9e35989-e47b-407f-97b4-a068ced0bbf9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027760482s Jan 11 18:14:38.646: INFO: Pod "busybox-readonly-false-f9e35989-e47b-407f-97b4-a068ced0bbf9" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 18:14:38.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-4021" for this suite. •{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":309,"completed":244,"skipped":4243,"failed":0} SSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 18:14:38.666: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward API volume plugin Jan 11 18:14:38.743: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a32ef2a6-3c77-44b3-a51f-caf4785b2298" in namespace "downward-api-7598" to be "Succeeded or Failed" Jan 11 18:14:38.749: INFO: Pod "downwardapi-volume-a32ef2a6-3c77-44b3-a51f-caf4785b2298": Phase="Pending", Reason="", readiness=false. Elapsed: 6.26129ms Jan 11 18:14:42.608: INFO: Pod "downwardapi-volume-a32ef2a6-3c77-44b3-a51f-caf4785b2298": Phase="Pending", Reason="", readiness=false. Elapsed: 3.864834767s Jan 11 18:14:44.887: INFO: Pod "downwardapi-volume-a32ef2a6-3c77-44b3-a51f-caf4785b2298": Phase="Running", Reason="", readiness=true. Elapsed: 6.14367569s Jan 11 18:14:46.895: INFO: Pod "downwardapi-volume-a32ef2a6-3c77-44b3-a51f-caf4785b2298": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.152659111s STEP: Saw pod success Jan 11 18:14:46.896: INFO: Pod "downwardapi-volume-a32ef2a6-3c77-44b3-a51f-caf4785b2298" satisfied condition "Succeeded or Failed" Jan 11 18:14:46.902: INFO: Trying to get logs from node leguer-worker pod downwardapi-volume-a32ef2a6-3c77-44b3-a51f-caf4785b2298 container client-container: STEP: delete the pod Jan 11 18:14:46.950: INFO: Waiting for pod downwardapi-volume-a32ef2a6-3c77-44b3-a51f-caf4785b2298 to disappear Jan 11 18:14:46.964: INFO: Pod downwardapi-volume-a32ef2a6-3c77-44b3-a51f-caf4785b2298 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 18:14:46.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7598" for this suite. • [SLOW TEST:8.342 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":309,"completed":245,"skipped":4247,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 18:14:47.013: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should check if v1 is in available api versions [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: validating api versions Jan 11 18:14:47.117: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-3516 api-versions' Jan 11 18:14:48.309: INFO: stderr: "" Jan 11 18:14:48.309: INFO: stdout: 
"admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nflowcontrol.apiserver.k8s.io/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1\nnode.k8s.io/v1beta1\npingcap.com/v1alpha1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 18:14:48.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3516" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":309,"completed":246,"skipped":4286,"failed":0} SSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 18:14:48.326: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should be possible to delete [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [AfterEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 18:14:49.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-4309" for this suite. 
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":309,"completed":247,"skipped":4289,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 18:14:49.917: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 Jan 11 18:14:50.117: INFO: Waiting up to 1m0s for all nodes to be ready Jan 11 18:15:50.213: INFO: Waiting for terminating namespaces to be deleted... [It] validates lower priority pod preemption by critical pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Create pods that use 2/3 of node resources. Jan 11 18:15:50.278: INFO: Created pod: pod0-sched-preemption-low-priority Jan 11 18:15:50.340: INFO: Created pod: pod1-sched-preemption-medium-priority STEP: Wait for pods to be scheduled. STEP: Run a critical pod that use same resources as that of a lower priority pod [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 18:16:18.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-867" for this suite. 
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 • [SLOW TEST:88.991 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates lower priority pod preemption by critical pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]","total":309,"completed":248,"skipped":4308,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 18:16:18.911: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test emptydir 0777 on node default medium Jan 11 18:16:19.016: INFO: Waiting up to 5m0s for pod "pod-12157300-48b4-4d6a-adec-3b5666ba92aa" in namespace "emptydir-9444" to be "Succeeded or Failed" Jan 11 18:16:19.047: INFO: Pod "pod-12157300-48b4-4d6a-adec-3b5666ba92aa": Phase="Pending", Reason="", readiness=false. Elapsed: 30.709494ms Jan 11 18:16:21.056: INFO: Pod "pod-12157300-48b4-4d6a-adec-3b5666ba92aa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039259727s Jan 11 18:16:24.629: INFO: Pod "pod-12157300-48b4-4d6a-adec-3b5666ba92aa": Phase="Pending", Reason="", readiness=false. Elapsed: 5.612887993s Jan 11 18:16:26.636: INFO: Pod "pod-12157300-48b4-4d6a-adec-3b5666ba92aa": Phase="Running", Reason="", readiness=true. Elapsed: 7.619845135s Jan 11 18:16:28.644: INFO: Pod "pod-12157300-48b4-4d6a-adec-3b5666ba92aa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.627199201s STEP: Saw pod success Jan 11 18:16:28.644: INFO: Pod "pod-12157300-48b4-4d6a-adec-3b5666ba92aa" satisfied condition "Succeeded or Failed" Jan 11 18:16:28.649: INFO: Trying to get logs from node leguer-worker pod pod-12157300-48b4-4d6a-adec-3b5666ba92aa container test-container: STEP: delete the pod Jan 11 18:16:28.710: INFO: Waiting for pod pod-12157300-48b4-4d6a-adec-3b5666ba92aa to disappear Jan 11 18:16:28.734: INFO: Pod pod-12157300-48b4-4d6a-adec-3b5666ba92aa no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 18:16:28.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9444" for this suite. 
• [SLOW TEST:9.841 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:45 should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":249,"skipped":4326,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 18:16:28.755: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward API volume plugin Jan 11 18:16:28.882: INFO: Waiting up to 5m0s for pod "downwardapi-volume-37151761-b395-461e-9a05-acc67f8fa96a" in namespace "projected-5071" to be "Succeeded or Failed" Jan 11 18:16:28.940: INFO: Pod "downwardapi-volume-37151761-b395-461e-9a05-acc67f8fa96a": Phase="Pending", Reason="", readiness=false. Elapsed: 57.818212ms Jan 11 18:16:30.947: INFO: Pod "downwardapi-volume-37151761-b395-461e-9a05-acc67f8fa96a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064487523s Jan 11 18:16:32.970: INFO: Pod "downwardapi-volume-37151761-b395-461e-9a05-acc67f8fa96a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.08775138s STEP: Saw pod success Jan 11 18:16:32.970: INFO: Pod "downwardapi-volume-37151761-b395-461e-9a05-acc67f8fa96a" satisfied condition "Succeeded or Failed" Jan 11 18:16:32.979: INFO: Trying to get logs from node leguer-worker pod downwardapi-volume-37151761-b395-461e-9a05-acc67f8fa96a container client-container: STEP: delete the pod Jan 11 18:16:33.009: INFO: Waiting for pod downwardapi-volume-37151761-b395-461e-9a05-acc67f8fa96a to disappear Jan 11 18:16:33.020: INFO: Pod downwardapi-volume-37151761-b395-461e-9a05-acc67f8fa96a no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 18:16:33.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5071" for this suite. 
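The projected downwardAPI case works because a resourceFieldRef on limits.memory falls back to the node's allocatable memory when the container sets no memory limit, so the file in the projected volume still has a concrete value to expose. A sketch of such a pod (names are illustrative):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-projected-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.33
    command: ["sh", "-c", "cat /etc/podinfo/mem_limit"]
    # no memory limit set on purpose: the projected value falls back to node allocatable
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: mem_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
EOF
kubectl logs downwardapi-projected-demo   # prints the node-allocatable memory in bytes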
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":309,"completed":250,"skipped":4363,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 18:16:33.280: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Jan 11 18:16:33.445: INFO: Pod name pod-release: Found 0 pods out of 1 Jan 11 18:16:38.452: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 18:16:39.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-9577" for this suite. 
• [SLOW TEST:6.227 seconds] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":309,"completed":251,"skipped":4391,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 18:16:39.509: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating configMap with name configmap-test-volume-map-cbe80e65-e718-4022-abcd-c054a35606f8 STEP: Creating a pod to test consume configMaps Jan 11 18:16:39.659: INFO: Waiting up to 5m0s for pod "pod-configmaps-d151c94f-fbcf-48af-96fd-34e65b1ae98e" in namespace "configmap-1103" to be "Succeeded or Failed" Jan 11 18:16:39.700: INFO: Pod "pod-configmaps-d151c94f-fbcf-48af-96fd-34e65b1ae98e": Phase="Pending", Reason="", readiness=false. Elapsed: 39.956329ms Jan 11 18:16:41.707: INFO: Pod "pod-configmaps-d151c94f-fbcf-48af-96fd-34e65b1ae98e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047242126s Jan 11 18:16:43.714: INFO: Pod "pod-configmaps-d151c94f-fbcf-48af-96fd-34e65b1ae98e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054625666s Jan 11 18:16:45.721: INFO: Pod "pod-configmaps-d151c94f-fbcf-48af-96fd-34e65b1ae98e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.060886811s STEP: Saw pod success Jan 11 18:16:45.721: INFO: Pod "pod-configmaps-d151c94f-fbcf-48af-96fd-34e65b1ae98e" satisfied condition "Succeeded or Failed" Jan 11 18:16:45.725: INFO: Trying to get logs from node leguer-worker pod pod-configmaps-d151c94f-fbcf-48af-96fd-34e65b1ae98e container agnhost-container: STEP: delete the pod Jan 11 18:16:45.802: INFO: Waiting for pod pod-configmaps-d151c94f-fbcf-48af-96fd-34e65b1ae98e to disappear Jan 11 18:16:45.811: INFO: Pod pod-configmaps-d151c94f-fbcf-48af-96fd-34e65b1ae98e no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 18:16:45.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1103" for this suite. 
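The "mappings and Item mode" part of the ConfigMap test means the key is projected to an explicit file path via items[].path and given a per-file mode, rather than the default key-named file with 0644 permissions. A sketch (names, key, and mode are illustrative):

kubectl create configmap cm-demo --from-literal=data-1=value-1     # illustrative name/key
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: configmap-vol-demo
spec:
  restartPolicy: Never
  containers:
  - name: agnhost-container
    image: busybox:1.33
    command: ["sh", "-c", "ls -lR /etc/configmap-volume && cat /etc/configmap-volume/path/to/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: cm-demo
      items:
      - key: data-1
        path: path/to/data-1   # the "mapping": key is exposed under a different file path
        mode: 0400             # the per-item file mode the test asserts on
EOF
kubectl logs configmap-vol-demo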
• [SLOW TEST:6.344 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":252,"skipped":4411,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 18:16:45.857: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward api env vars Jan 11 18:16:45.936: INFO: Waiting up to 5m0s for pod "downward-api-0d0b52bd-3dc2-44aa-9648-d267be1f48d6" in namespace "downward-api-6665" to be "Succeeded or Failed" Jan 11 18:16:45.983: INFO: Pod "downward-api-0d0b52bd-3dc2-44aa-9648-d267be1f48d6": Phase="Pending", Reason="", readiness=false. Elapsed: 46.748966ms Jan 11 18:16:47.992: INFO: Pod "downward-api-0d0b52bd-3dc2-44aa-9648-d267be1f48d6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055209564s Jan 11 18:16:50.001: INFO: Pod "downward-api-0d0b52bd-3dc2-44aa-9648-d267be1f48d6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.064733282s STEP: Saw pod success Jan 11 18:16:50.002: INFO: Pod "downward-api-0d0b52bd-3dc2-44aa-9648-d267be1f48d6" satisfied condition "Succeeded or Failed" Jan 11 18:16:50.008: INFO: Trying to get logs from node leguer-worker pod downward-api-0d0b52bd-3dc2-44aa-9648-d267be1f48d6 container dapi-container: STEP: delete the pod Jan 11 18:16:50.060: INFO: Waiting for pod downward-api-0d0b52bd-3dc2-44aa-9648-d267be1f48d6 to disappear Jan 11 18:16:50.069: INFO: Pod downward-api-0d0b52bd-3dc2-44aa-9648-d267be1f48d6 no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 18:16:50.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6665" for this suite. 
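[editor's note] The Downward API spec above injects the pod's own UID into the container environment via a `fieldRef`. A minimal sketch of that pod follows; it is not the suite's code, the container name `dapi-container` matches the log, and the env var name, image, and command are assumptions.

```go
// Sketch: expose the pod's UID as an environment variable via the downward API.
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-uid-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "echo POD_UID=$POD_UID"},
				Env: []corev1.EnvVar{{
					Name: "POD_UID",
					ValueFrom: &corev1.EnvVarSource{
						FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.uid"},
					},
				}},
			}},
		},
	}
	if _, err := client.CoreV1().Pods("downward-api-6665").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	// The test then waits for "Succeeded or Failed" and checks the logged UID
	// against the UID the apiserver assigned to the pod.
}
```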
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":309,"completed":253,"skipped":4436,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 18:16:50.086: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:92 Jan 11 18:16:50.179: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jan 11 18:16:50.193: INFO: Waiting for terminating namespaces to be deleted... Jan 11 18:16:50.199: INFO: Logging pods the apiserver thinks is on node leguer-worker before test Jan 11 18:16:50.212: INFO: rally-a8f48c6d-3kmika18-pdtzv from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 11 18:16:50.212: INFO: Container rally-a8f48c6d-3kmika18 ready: true, restart count 0 Jan 11 18:16:50.212: INFO: rally-a8f48c6d-3kmika18-pllzg from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 11 18:16:50.212: INFO: Container rally-a8f48c6d-3kmika18 ready: true, restart count 0 Jan 11 18:16:50.212: INFO: rally-a8f48c6d-4cyi45kq-j5tzz from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 11 18:16:50.212: INFO: Container rally-a8f48c6d-4cyi45kq ready: true, restart count 0 Jan 11 18:16:50.212: INFO: rally-a8f48c6d-f3hls6a3-57dwc from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:32 +0000 UTC (1 container statuses recorded) Jan 11 18:16:50.212: INFO: Container rally-a8f48c6d-f3hls6a3 ready: true, restart count 0 Jan 11 18:16:50.212: INFO: rally-a8f48c6d-1y3amfc0-lp8st from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:32 +0000 UTC (1 container statuses recorded) Jan 11 18:16:50.213: INFO: Container rally-a8f48c6d-1y3amfc0 ready: true, restart count 0 Jan 11 18:16:50.213: INFO: rally-a8f48c6d-9pqmjehi-9zwjj from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 11 18:16:50.213: INFO: Container rally-a8f48c6d-9pqmjehi ready: true, restart count 0 Jan 11 18:16:50.213: INFO: chaos-controller-manager-69c479c674-s796v from default started at 2021-01-10 20:58:24 +0000 UTC (1 container statuses recorded) Jan 11 18:16:50.213: INFO: Container chaos-mesh ready: true, restart count 0 Jan 11 18:16:50.213: INFO: chaos-daemon-lv692 from default started at 2021-01-10 20:58:25 +0000 UTC (1 container statuses recorded) Jan 11 18:16:50.213: INFO: Container chaos-daemon ready: true, restart count 0 Jan 11 18:16:50.213: INFO: kindnet-psm25 from kube-system started at 2021-01-10 17:38:10 +0000 UTC (1 container statuses recorded) Jan 11 18:16:50.213: INFO: Container kindnet-cni ready: true, restart count 0 Jan 11 18:16:50.213: INFO: kube-proxy-bmbcs from kube-system started at 2021-01-10 
17:38:10 +0000 UTC (1 container statuses recorded) Jan 11 18:16:50.213: INFO: Container kube-proxy ready: true, restart count 0 Jan 11 18:16:50.213: INFO: Logging pods the apiserver thinks is on node leguer-worker2 before test Jan 11 18:16:50.224: INFO: rally-a8f48c6d-4cyi45kq-knr4r from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 11 18:16:50.224: INFO: Container rally-a8f48c6d-4cyi45kq ready: true, restart count 0 Jan 11 18:16:50.224: INFO: rally-a8f48c6d-f3hls6a3-dwt8n from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:32 +0000 UTC (1 container statuses recorded) Jan 11 18:16:50.225: INFO: Container rally-a8f48c6d-f3hls6a3 ready: true, restart count 0 Jan 11 18:16:50.225: INFO: rally-a8f48c6d-1y3amfc0-hh9qk from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:32 +0000 UTC (1 container statuses recorded) Jan 11 18:16:50.225: INFO: Container rally-a8f48c6d-1y3amfc0 ready: true, restart count 0 Jan 11 18:16:50.225: INFO: rally-a8f48c6d-9pqmjehi-85slb from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 11 18:16:50.225: INFO: Container rally-a8f48c6d-9pqmjehi ready: true, restart count 0 Jan 11 18:16:50.225: INFO: rally-a8f48c6d-vnukxqu0-llj24 from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 11 18:16:50.225: INFO: Container rally-a8f48c6d-vnukxqu0 ready: true, restart count 0 Jan 11 18:16:50.225: INFO: rally-a8f48c6d-vnukxqu0-v85kr from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 11 18:16:50.225: INFO: Container rally-a8f48c6d-vnukxqu0 ready: true, restart count 0 Jan 11 18:16:50.225: INFO: chaos-daemon-ffkg7 from default started at 2021-01-10 20:58:25 +0000 UTC (1 container statuses recorded) Jan 11 18:16:50.225: INFO: Container chaos-daemon ready: true, restart count 0 Jan 11 18:16:50.225: INFO: kindnet-8wggd from kube-system started at 2021-01-10 17:38:10 +0000 UTC (1 container statuses recorded) Jan 11 18:16:50.225: INFO: Container kindnet-cni ready: true, restart count 0 Jan 11 18:16:50.225: INFO: kube-proxy-29gxg from kube-system started at 2021-01-10 17:38:09 +0000 UTC (1 container statuses recorded) Jan 11 18:16:50.225: INFO: Container kube-proxy ready: true, restart count 0 Jan 11 18:16:50.225: INFO: pod1-sched-preemption-medium-priority from sched-preemption-867 started at 2021-01-11 18:15:56 +0000 UTC (1 container statuses recorded) Jan 11 18:16:50.225: INFO: Container pod1-sched-preemption-medium-priority ready: false, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: verifying the node has the label node leguer-worker STEP: verifying the node has the label node leguer-worker2 Jan 11 18:16:50.352: INFO: Pod rally-a8f48c6d-3kmika18-pdtzv requesting resource cpu=0m on Node leguer-worker Jan 11 18:16:50.352: INFO: Pod rally-a8f48c6d-3kmika18-pllzg requesting resource cpu=0m on Node leguer-worker Jan 11 18:16:50.352: INFO: Pod rally-a8f48c6d-4cyi45kq-j5tzz requesting resource cpu=0m on Node leguer-worker Jan 11 18:16:50.352: INFO: Pod rally-a8f48c6d-4cyi45kq-knr4r requesting resource cpu=0m on Node leguer-worker2 Jan 11 18:16:50.352: INFO: Pod rally-a8f48c6d-f3hls6a3-57dwc requesting resource cpu=0m on Node leguer-worker Jan 11 18:16:50.352: INFO: Pod rally-a8f48c6d-f3hls6a3-dwt8n 
requesting resource cpu=0m on Node leguer-worker2 Jan 11 18:16:50.352: INFO: Pod rally-a8f48c6d-1y3amfc0-hh9qk requesting resource cpu=0m on Node leguer-worker2 Jan 11 18:16:50.352: INFO: Pod rally-a8f48c6d-1y3amfc0-lp8st requesting resource cpu=0m on Node leguer-worker Jan 11 18:16:50.352: INFO: Pod rally-a8f48c6d-9pqmjehi-85slb requesting resource cpu=0m on Node leguer-worker2 Jan 11 18:16:50.352: INFO: Pod rally-a8f48c6d-9pqmjehi-9zwjj requesting resource cpu=0m on Node leguer-worker Jan 11 18:16:50.352: INFO: Pod rally-a8f48c6d-vnukxqu0-llj24 requesting resource cpu=0m on Node leguer-worker2 Jan 11 18:16:50.352: INFO: Pod rally-a8f48c6d-vnukxqu0-v85kr requesting resource cpu=0m on Node leguer-worker2 Jan 11 18:16:50.353: INFO: Pod chaos-controller-manager-69c479c674-s796v requesting resource cpu=25m on Node leguer-worker Jan 11 18:16:50.353: INFO: Pod chaos-daemon-ffkg7 requesting resource cpu=0m on Node leguer-worker2 Jan 11 18:16:50.353: INFO: Pod chaos-daemon-lv692 requesting resource cpu=0m on Node leguer-worker Jan 11 18:16:50.353: INFO: Pod kindnet-8wggd requesting resource cpu=100m on Node leguer-worker2 Jan 11 18:16:50.353: INFO: Pod kindnet-psm25 requesting resource cpu=100m on Node leguer-worker Jan 11 18:16:50.353: INFO: Pod kube-proxy-29gxg requesting resource cpu=0m on Node leguer-worker2 Jan 11 18:16:50.353: INFO: Pod kube-proxy-bmbcs requesting resource cpu=0m on Node leguer-worker Jan 11 18:16:50.353: INFO: Pod pod1-sched-preemption-medium-priority requesting resource cpu=0m on Node leguer-worker2 STEP: Starting Pods to consume most of the cluster CPU. Jan 11 18:16:50.353: INFO: Creating a pod which consumes cpu=11112m on Node leguer-worker Jan 11 18:16:50.363: INFO: Creating a pod which consumes cpu=11130m on Node leguer-worker2 STEP: Creating another pod that requires unavailable amount of CPU. 
STEP: Considering event: Type = [Normal], Name = [filler-pod-c690ed10-987c-4d5c-b61a-a53c98cc7e99.16594045d43281d2], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7325/filler-pod-c690ed10-987c-4d5c-b61a-a53c98cc7e99 to leguer-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-c690ed10-987c-4d5c-b61a-a53c98cc7e99.165940464f75d45d], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-c690ed10-987c-4d5c-b61a-a53c98cc7e99.165940469039d8e8], Reason = [Created], Message = [Created container filler-pod-c690ed10-987c-4d5c-b61a-a53c98cc7e99] STEP: Considering event: Type = [Normal], Name = [filler-pod-c690ed10-987c-4d5c-b61a-a53c98cc7e99.16594046a0ac0c29], Reason = [Started], Message = [Started container filler-pod-c690ed10-987c-4d5c-b61a-a53c98cc7e99] STEP: Considering event: Type = [Normal], Name = [filler-pod-ebbec3c9-c6b0-4cbb-bac9-4d9aa1f6b1ad.16594045d183ac68], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7325/filler-pod-ebbec3c9-c6b0-4cbb-bac9-4d9aa1f6b1ad to leguer-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-ebbec3c9-c6b0-4cbb-bac9-4d9aa1f6b1ad.165940461d2b24a2], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-ebbec3c9-c6b0-4cbb-bac9-4d9aa1f6b1ad.165940465c16fd8b], Reason = [Created], Message = [Created container filler-pod-ebbec3c9-c6b0-4cbb-bac9-4d9aa1f6b1ad] STEP: Considering event: Type = [Normal], Name = [filler-pod-ebbec3c9-c6b0-4cbb-bac9-4d9aa1f6b1ad.16594046737ceebf], Reason = [Started], Message = [Started container filler-pod-ebbec3c9-c6b0-4cbb-bac9-4d9aa1f6b1ad] STEP: Considering event: Type = [Warning], Name = [additional-pod.165940473c6f8d89], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node leguer-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node leguer-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 18:16:57.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-7325" for this suite. 
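[editor's note] The SchedulerPredicates spec above sums the CPU already requested on each node, then creates "filler" pods sized to consume the remainder, so that one more pod with any CPU request fails with `Insufficient cpu`. Below is a sketch of one filler pod, not the suite's code; the 11112m request, node label `node=leguer-worker`, namespace, and pause image are taken from the log, the rest is illustrative.

```go
// Sketch: filler pod that requests (almost) all remaining allocatable CPU on one node.
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	cpu := resource.MustParse("11112m") // allocatable CPU minus what running pods already request
	filler := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "filler-pod-example"},
		Spec: corev1.PodSpec{
			// The test first labels the node with "node"=<node name> and pins the filler to it.
			NodeSelector: map[string]string{"node": "leguer-worker"},
			Containers: []corev1.Container{{
				Name:  "filler",
				Image: "k8s.gcr.io/pause:3.2",
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{corev1.ResourceCPU: cpu},
					Limits:   corev1.ResourceList{corev1.ResourceCPU: cpu},
				},
			}},
		},
	}
	if _, err := client.CoreV1().Pods("sched-pred-7325").Create(context.TODO(), filler, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	// An "additional-pod" created afterwards with a non-zero CPU request stays Pending, and its
	// FailedScheduling event reports "2 Insufficient cpu" as in the events listed above.
}
```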
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:83 • [SLOW TEST:7.513 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":309,"completed":254,"skipped":4485,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 18:16:57.601: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:187 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 11 18:16:57.684: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 18:17:01.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5310" for this suite. •{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":309,"completed":255,"skipped":4495,"failed":0} SSSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 18:17:01.832: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating configMap that has name configmap-test-emptyKey-2c3a6e41-e6b6-4507-9da7-09c006a3ebd7 [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 18:17:01.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4147" for this suite. 
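[editor's note] The ConfigMap spec just above is a negative test: apiserver validation rejects a ConfigMap whose `data` map contains an empty key, so the create call must fail. A minimal sketch of that check follows (not the suite's code; the namespace is from the log, the object name is a placeholder).

```go
// Sketch: creating a ConfigMap with an empty data key must be rejected by validation.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-emptykey"},
		Data:       map[string]string{"": "value"}, // empty key is invalid
	}
	_, err = client.CoreV1().ConfigMaps("configmap-4147").Create(context.TODO(), cm, metav1.CreateOptions{})
	if err == nil {
		panic("expected create to fail for a ConfigMap with an empty data key")
	}
	fmt.Println("create rejected as expected:", err)
}
```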
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":309,"completed":256,"skipped":4502,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 18:17:01.997: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 18:17:13.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5107" for this suite. • [SLOW TEST:11.242 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance]","total":309,"completed":257,"skipped":4515,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 18:17:13.240: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should check if kubectl can dry-run update Pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: running the image docker.io/library/httpd:2.4.38-alpine Jan 11 18:17:13.325: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-3473 run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod' Jan 11 18:17:14.595: INFO: stderr: "" Jan 11 18:17:14.595: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: replace the image in the pod with server-side dry-run Jan 11 18:17:14.596: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-3473 patch pod e2e-test-httpd-pod -p {"spec":{"containers":[{"name": "e2e-test-httpd-pod","image": "docker.io/library/busybox:1.29"}]}} --dry-run=server' Jan 11 18:17:17.330: INFO: stderr: "" Jan 11 18:17:17.330: INFO: stdout: "pod/e2e-test-httpd-pod patched\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/httpd:2.4.38-alpine Jan 11 18:17:17.335: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-3473 delete pods e2e-test-httpd-pod' Jan 11 18:17:29.818: INFO: stderr: "" Jan 11 18:17:29.818: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 18:17:29.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3473" for this suite. 
• [SLOW TEST:16.603 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl server-side dry-run /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:909 should check if kubectl can dry-run update Pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":309,"completed":258,"skipped":4545,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 18:17:29.846: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward API volume plugin Jan 11 18:17:29.923: INFO: Waiting up to 5m0s for pod "downwardapi-volume-51a73030-30f3-4741-ab2e-ec0427c04862" in namespace "downward-api-5188" to be "Succeeded or Failed" Jan 11 18:17:29.971: INFO: Pod "downwardapi-volume-51a73030-30f3-4741-ab2e-ec0427c04862": Phase="Pending", Reason="", readiness=false. Elapsed: 47.952841ms Jan 11 18:17:31.979: INFO: Pod "downwardapi-volume-51a73030-30f3-4741-ab2e-ec0427c04862": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056043376s Jan 11 18:17:33.988: INFO: Pod "downwardapi-volume-51a73030-30f3-4741-ab2e-ec0427c04862": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.065601301s STEP: Saw pod success Jan 11 18:17:33.989: INFO: Pod "downwardapi-volume-51a73030-30f3-4741-ab2e-ec0427c04862" satisfied condition "Succeeded or Failed" Jan 11 18:17:33.995: INFO: Trying to get logs from node leguer-worker2 pod downwardapi-volume-51a73030-30f3-4741-ab2e-ec0427c04862 container client-container: STEP: delete the pod Jan 11 18:17:34.061: INFO: Waiting for pod downwardapi-volume-51a73030-30f3-4741-ab2e-ec0427c04862 to disappear Jan 11 18:17:34.161: INFO: Pod downwardapi-volume-51a73030-30f3-4741-ab2e-ec0427c04862 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 18:17:34.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5188" for this suite. 
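[editor's note] The Downward API volume spec above sets `defaultMode` on the projected volume and checks the resulting file permissions from inside the pod. A minimal sketch of that pod shape follows (not the suite's code); the container name `client-container` and namespace match the log, while mode 0400, the projected item, image, and command are assumptions.

```go
// Sketch: downward API volume with an explicit defaultMode applied to projected files.
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	defaultMode := int32(0400)
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-defaultmode-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{DownwardAPI: &corev1.DownwardAPIVolumeSource{
					DefaultMode: &defaultMode,
					Items: []corev1.DownwardAPIVolumeFile{{
						Path:     "podname",
						FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
					}},
				}},
			}},
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "docker.io/library/busybox:1.29",
				Command:      []string{"sh", "-c", "stat -c '%a' /etc/podinfo/podname"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
	if _, err := client.CoreV1().Pods("downward-api-5188").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	// The suite waits for "Succeeded or Failed" and expects the container log to print "400".
}
```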
•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":259,"skipped":4564,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 18:17:34.185: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating configMap with name configmap-test-volume-193ca914-e691-4de6-ac43-c0ca20285b26 STEP: Creating a pod to test consume configMaps Jan 11 18:17:34.302: INFO: Waiting up to 5m0s for pod "pod-configmaps-a2c1aa24-177b-4bb7-825f-1c1939175889" in namespace "configmap-8457" to be "Succeeded or Failed" Jan 11 18:17:34.323: INFO: Pod "pod-configmaps-a2c1aa24-177b-4bb7-825f-1c1939175889": Phase="Pending", Reason="", readiness=false. Elapsed: 21.070616ms Jan 11 18:17:36.425: INFO: Pod "pod-configmaps-a2c1aa24-177b-4bb7-825f-1c1939175889": Phase="Pending", Reason="", readiness=false. Elapsed: 2.123185261s Jan 11 18:17:38.433: INFO: Pod "pod-configmaps-a2c1aa24-177b-4bb7-825f-1c1939175889": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.131124971s STEP: Saw pod success Jan 11 18:17:38.434: INFO: Pod "pod-configmaps-a2c1aa24-177b-4bb7-825f-1c1939175889" satisfied condition "Succeeded or Failed" Jan 11 18:17:38.444: INFO: Trying to get logs from node leguer-worker2 pod pod-configmaps-a2c1aa24-177b-4bb7-825f-1c1939175889 container configmap-volume-test: STEP: delete the pod Jan 11 18:17:38.477: INFO: Waiting for pod pod-configmaps-a2c1aa24-177b-4bb7-825f-1c1939175889 to disappear Jan 11 18:17:38.491: INFO: Pod pod-configmaps-a2c1aa24-177b-4bb7-825f-1c1939175889 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 18:17:38.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8457" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":309,"completed":260,"skipped":4574,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 18:17:38.509: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating service in namespace services-4287 STEP: creating service affinity-clusterip-transition in namespace services-4287 STEP: creating replication controller affinity-clusterip-transition in namespace services-4287 I0111 18:17:38.606270 10 runners.go:190] Created replication controller with name: affinity-clusterip-transition, namespace: services-4287, replica count: 3 I0111 18:17:41.657774 10 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0111 18:17:44.658992 10 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 11 18:17:44.670: INFO: Creating new exec pod Jan 11 18:17:49.748: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-4287 exec execpod-affinity4rdqd -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80' Jan 11 18:17:51.222: INFO: stderr: "I0111 18:17:51.096134 4765 log.go:181] (0x24ee000) (0x24ee070) Create stream\nI0111 18:17:51.101558 4765 log.go:181] (0x24ee000) (0x24ee070) Stream added, broadcasting: 1\nI0111 18:17:51.113093 4765 log.go:181] (0x24ee000) Reply frame received for 1\nI0111 18:17:51.113606 4765 log.go:181] (0x24ee000) (0x2ac80e0) Create stream\nI0111 18:17:51.113683 4765 log.go:181] (0x24ee000) (0x2ac80e0) Stream added, broadcasting: 3\nI0111 18:17:51.115134 4765 log.go:181] (0x24ee000) Reply frame received for 3\nI0111 18:17:51.115419 4765 log.go:181] (0x24ee000) (0x250eaf0) Create stream\nI0111 18:17:51.115516 4765 log.go:181] (0x24ee000) (0x250eaf0) Stream added, broadcasting: 5\nI0111 18:17:51.116935 4765 log.go:181] (0x24ee000) Reply frame received for 5\nI0111 18:17:51.202686 4765 log.go:181] (0x24ee000) Data frame received for 5\nI0111 18:17:51.203012 4765 log.go:181] (0x250eaf0) (5) Data frame handling\nI0111 18:17:51.203242 4765 log.go:181] (0x24ee000) Data frame received for 3\nI0111 18:17:51.203403 4765 log.go:181] (0x2ac80e0) (3) Data frame handling\nI0111 18:17:51.203568 4765 log.go:181] (0x24ee000) Data frame received for 1\nI0111 18:17:51.203805 4765 log.go:181] (0x24ee070) (1) Data frame handling\nI0111 18:17:51.204010 4765 
log.go:181] (0x250eaf0) (5) Data frame sent\nI0111 18:17:51.204313 4765 log.go:181] (0x24ee070) (1) Data frame sent\n+ nc -zv -t -w 2 affinity-clusterip-transition 80\nI0111 18:17:51.204733 4765 log.go:181] (0x24ee000) Data frame received for 5\nI0111 18:17:51.204969 4765 log.go:181] (0x250eaf0) (5) Data frame handling\nI0111 18:17:51.206921 4765 log.go:181] (0x24ee000) (0x24ee070) Stream removed, broadcasting: 1\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!\nI0111 18:17:51.207938 4765 log.go:181] (0x250eaf0) (5) Data frame sent\nI0111 18:17:51.208124 4765 log.go:181] (0x24ee000) Data frame received for 5\nI0111 18:17:51.208265 4765 log.go:181] (0x250eaf0) (5) Data frame handling\nI0111 18:17:51.210269 4765 log.go:181] (0x24ee000) Go away received\nI0111 18:17:51.213842 4765 log.go:181] (0x24ee000) (0x24ee070) Stream removed, broadcasting: 1\nI0111 18:17:51.214169 4765 log.go:181] (0x24ee000) (0x2ac80e0) Stream removed, broadcasting: 3\nI0111 18:17:51.214380 4765 log.go:181] (0x24ee000) (0x250eaf0) Stream removed, broadcasting: 5\n" Jan 11 18:17:51.223: INFO: stdout: "" Jan 11 18:17:51.230: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-4287 exec execpod-affinity4rdqd -- /bin/sh -x -c nc -zv -t -w 2 10.96.73.149 80' Jan 11 18:17:52.741: INFO: stderr: "I0111 18:17:52.625976 4785 log.go:181] (0x2808000) (0x2808070) Create stream\nI0111 18:17:52.639939 4785 log.go:181] (0x2808000) (0x2808070) Stream added, broadcasting: 1\nI0111 18:17:52.650510 4785 log.go:181] (0x2808000) Reply frame received for 1\nI0111 18:17:52.651026 4785 log.go:181] (0x2808000) (0x2fa61c0) Create stream\nI0111 18:17:52.651096 4785 log.go:181] (0x2808000) (0x2fa61c0) Stream added, broadcasting: 3\nI0111 18:17:52.652325 4785 log.go:181] (0x2808000) Reply frame received for 3\nI0111 18:17:52.652574 4785 log.go:181] (0x2808000) (0x2fa6380) Create stream\nI0111 18:17:52.652655 4785 log.go:181] (0x2808000) (0x2fa6380) Stream added, broadcasting: 5\nI0111 18:17:52.653919 4785 log.go:181] (0x2808000) Reply frame received for 5\nI0111 18:17:52.722512 4785 log.go:181] (0x2808000) Data frame received for 5\nI0111 18:17:52.722941 4785 log.go:181] (0x2808000) Data frame received for 3\nI0111 18:17:52.723296 4785 log.go:181] (0x2fa61c0) (3) Data frame handling\nI0111 18:17:52.723511 4785 log.go:181] (0x2fa6380) (5) Data frame handling\nI0111 18:17:52.723716 4785 log.go:181] (0x2808000) Data frame received for 1\nI0111 18:17:52.723870 4785 log.go:181] (0x2808070) (1) Data frame handling\nI0111 18:17:52.725899 4785 log.go:181] (0x2fa6380) (5) Data frame sent\nI0111 18:17:52.726321 4785 log.go:181] (0x2808070) (1) Data frame sent\nI0111 18:17:52.726735 4785 log.go:181] (0x2808000) Data frame received for 5\nI0111 18:17:52.726858 4785 log.go:181] (0x2fa6380) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.73.149 80\nConnection to 10.96.73.149 80 port [tcp/http] succeeded!\nI0111 18:17:52.729333 4785 log.go:181] (0x2808000) (0x2808070) Stream removed, broadcasting: 1\nI0111 18:17:52.729783 4785 log.go:181] (0x2808000) Go away received\nI0111 18:17:52.733052 4785 log.go:181] (0x2808000) (0x2808070) Stream removed, broadcasting: 1\nI0111 18:17:52.733258 4785 log.go:181] (0x2808000) (0x2fa61c0) Stream removed, broadcasting: 3\nI0111 18:17:52.733410 4785 log.go:181] (0x2808000) (0x2fa6380) Stream removed, broadcasting: 5\n" Jan 11 18:17:52.741: INFO: stdout: "" Jan 11 18:17:52.755: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-4287 exec execpod-affinity4rdqd -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.96.73.149:80/ ; done' Jan 11 18:17:54.326: INFO: stderr: "I0111 18:17:54.079736 4806 log.go:181] (0x2eac5b0) (0x2eac620) Create stream\nI0111 18:17:54.082758 4806 log.go:181] (0x2eac5b0) (0x2eac620) Stream added, broadcasting: 1\nI0111 18:17:54.099332 4806 log.go:181] (0x2eac5b0) Reply frame received for 1\nI0111 18:17:54.099771 4806 log.go:181] (0x2eac5b0) (0x2eac070) Create stream\nI0111 18:17:54.099837 4806 log.go:181] (0x2eac5b0) (0x2eac070) Stream added, broadcasting: 3\nI0111 18:17:54.101262 4806 log.go:181] (0x2eac5b0) Reply frame received for 3\nI0111 18:17:54.101548 4806 log.go:181] (0x2eac5b0) (0x247d8f0) Create stream\nI0111 18:17:54.101630 4806 log.go:181] (0x2eac5b0) (0x247d8f0) Stream added, broadcasting: 5\nI0111 18:17:54.102656 4806 log.go:181] (0x2eac5b0) Reply frame received for 5\nI0111 18:17:54.196474 4806 log.go:181] (0x2eac5b0) Data frame received for 3\nI0111 18:17:54.197108 4806 log.go:181] (0x2eac5b0) Data frame received for 5\nI0111 18:17:54.197380 4806 log.go:181] (0x247d8f0) (5) Data frame handling\nI0111 18:17:54.197507 4806 log.go:181] (0x2eac070) (3) Data frame handling\nI0111 18:17:54.198298 4806 log.go:181] (0x2eac070) (3) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.73.149:80/\nI0111 18:17:54.198945 4806 log.go:181] (0x247d8f0) (5) Data frame sent\nI0111 18:17:54.202423 4806 log.go:181] (0x2eac5b0) Data frame received for 3\nI0111 18:17:54.202664 4806 log.go:181] (0x2eac070) (3) Data frame handling\nI0111 18:17:54.202862 4806 log.go:181] (0x2eac070) (3) Data frame sent\nI0111 18:17:54.203341 4806 log.go:181] (0x2eac5b0) Data frame received for 3\nI0111 18:17:54.203486 4806 log.go:181] (0x2eac070) (3) Data frame handling\nI0111 18:17:54.203645 4806 log.go:181] (0x2eac5b0) Data frame received for 5\nI0111 18:17:54.203866 4806 log.go:181] (0x247d8f0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.73.149:80/\nI0111 18:17:54.204041 4806 log.go:181] (0x2eac070) (3) Data frame sent\nI0111 18:17:54.204195 4806 log.go:181] (0x247d8f0) (5) Data frame sent\nI0111 18:17:54.211158 4806 log.go:181] (0x2eac5b0) Data frame received for 3\nI0111 18:17:54.211275 4806 log.go:181] (0x2eac070) (3) Data frame handling\nI0111 18:17:54.211442 4806 log.go:181] (0x2eac070) (3) Data frame sent\nI0111 18:17:54.212321 4806 log.go:181] (0x2eac5b0) Data frame received for 3\nI0111 18:17:54.212461 4806 log.go:181] (0x2eac070) (3) Data frame handling\nI0111 18:17:54.212613 4806 log.go:181] (0x2eac5b0) Data frame received for 5\nI0111 18:17:54.212745 4806 log.go:181] (0x247d8f0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.73.149:80/\nI0111 18:17:54.212929 4806 log.go:181] (0x2eac070) (3) Data frame sent\nI0111 18:17:54.213042 4806 log.go:181] (0x247d8f0) (5) Data frame sent\nI0111 18:17:54.219457 4806 log.go:181] (0x2eac5b0) Data frame received for 3\nI0111 18:17:54.219600 4806 log.go:181] (0x2eac070) (3) Data frame handling\nI0111 18:17:54.219748 4806 log.go:181] (0x2eac070) (3) Data frame sent\nI0111 18:17:54.220232 4806 log.go:181] (0x2eac5b0) Data frame received for 5\nI0111 18:17:54.220458 4806 log.go:181] (0x247d8f0) (5) Data frame handling\nI0111 18:17:54.220698 4806 log.go:181] (0x247d8f0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 
http://10.96.73.149:80/\nI0111 18:17:54.220964 4806 log.go:181] (0x2eac5b0) Data frame received for 3\nI0111 18:17:54.221151 4806 log.go:181] (0x2eac070) (3) Data frame handling\nI0111 18:17:54.221353 4806 log.go:181] (0x2eac070) (3) Data frame sent\nI0111 18:17:54.227533 4806 log.go:181] (0x2eac5b0) Data frame received for 3\nI0111 18:17:54.227671 4806 log.go:181] (0x2eac070) (3) Data frame handling\nI0111 18:17:54.227807 4806 log.go:181] (0x2eac070) (3) Data frame sent\nI0111 18:17:54.228352 4806 log.go:181] (0x2eac5b0) Data frame received for 5\nI0111 18:17:54.228529 4806 log.go:181] (0x247d8f0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.73.149:80/\nI0111 18:17:54.228667 4806 log.go:181] (0x2eac5b0) Data frame received for 3\nI0111 18:17:54.228820 4806 log.go:181] (0x2eac070) (3) Data frame handling\nI0111 18:17:54.229254 4806 log.go:181] (0x247d8f0) (5) Data frame sent\nI0111 18:17:54.229412 4806 log.go:181] (0x2eac070) (3) Data frame sent\nI0111 18:17:54.235120 4806 log.go:181] (0x2eac5b0) Data frame received for 3\nI0111 18:17:54.235296 4806 log.go:181] (0x2eac070) (3) Data frame handling\nI0111 18:17:54.235472 4806 log.go:181] (0x2eac5b0) Data frame received for 5\nI0111 18:17:54.235623 4806 log.go:181] (0x247d8f0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.73.149:80/\nI0111 18:17:54.235750 4806 log.go:181] (0x2eac070) (3) Data frame sent\nI0111 18:17:54.235934 4806 log.go:181] (0x2eac5b0) Data frame received for 3\nI0111 18:17:54.236088 4806 log.go:181] (0x2eac070) (3) Data frame handling\nI0111 18:17:54.236248 4806 log.go:181] (0x247d8f0) (5) Data frame sent\nI0111 18:17:54.236362 4806 log.go:181] (0x2eac070) (3) Data frame sent\nI0111 18:17:54.241273 4806 log.go:181] (0x2eac5b0) Data frame received for 3\nI0111 18:17:54.241428 4806 log.go:181] (0x2eac070) (3) Data frame handling\nI0111 18:17:54.241611 4806 log.go:181] (0x2eac070) (3) Data frame sent\nI0111 18:17:54.241944 4806 log.go:181] (0x2eac5b0) Data frame received for 3\nI0111 18:17:54.242063 4806 log.go:181] (0x2eac070) (3) Data frame handling\nI0111 18:17:54.242154 4806 log.go:181] (0x2eac5b0) Data frame received for 5\nI0111 18:17:54.242319 4806 log.go:181] (0x247d8f0) (5) Data frame handling\nI0111 18:17:54.242424 4806 log.go:181] (0x2eac070) (3) Data frame sent\nI0111 18:17:54.242530 4806 log.go:181] (0x247d8f0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.73.149:80/\nI0111 18:17:54.247132 4806 log.go:181] (0x2eac5b0) Data frame received for 3\nI0111 18:17:54.247300 4806 log.go:181] (0x2eac070) (3) Data frame handling\nI0111 18:17:54.247441 4806 log.go:181] (0x2eac070) (3) Data frame sent\nI0111 18:17:54.247857 4806 log.go:181] (0x2eac5b0) Data frame received for 3\nI0111 18:17:54.247983 4806 log.go:181] (0x2eac070) (3) Data frame handling\nI0111 18:17:54.248083 4806 log.go:181] (0x2eac070) (3) Data frame sent\nI0111 18:17:54.248177 4806 log.go:181] (0x2eac5b0) Data frame received for 5\nI0111 18:17:54.248271 4806 log.go:181] (0x247d8f0) (5) Data frame handling\nI0111 18:17:54.248394 4806 log.go:181] (0x247d8f0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.73.149:80/\nI0111 18:17:54.254951 4806 log.go:181] (0x2eac5b0) Data frame received for 3\nI0111 18:17:54.255080 4806 log.go:181] (0x2eac070) (3) Data frame handling\nI0111 18:17:54.255257 4806 log.go:181] (0x2eac070) (3) Data frame sent\nI0111 18:17:54.255416 4806 log.go:181] (0x2eac5b0) Data frame received for 5\nI0111 18:17:54.255551 
4806 log.go:181] (0x247d8f0) (5) Data frame handling\nI0111 18:17:54.255684 4806 log.go:181] (0x247d8f0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.73.149:80/\nI0111 18:17:54.255798 4806 log.go:181] (0x2eac5b0) Data frame received for 3\nI0111 18:17:54.255912 4806 log.go:181] (0x2eac070) (3) Data frame handling\nI0111 18:17:54.256049 4806 log.go:181] (0x2eac070) (3) Data frame sent\nI0111 18:17:54.261073 4806 log.go:181] (0x2eac5b0) Data frame received for 3\nI0111 18:17:54.261282 4806 log.go:181] (0x2eac070) (3) Data frame handling\nI0111 18:17:54.261454 4806 log.go:181] (0x2eac070) (3) Data frame sent\nI0111 18:17:54.261671 4806 log.go:181] (0x2eac5b0) Data frame received for 5\nI0111 18:17:54.261839 4806 log.go:181] (0x2eac5b0) Data frame received for 3\nI0111 18:17:54.262178 4806 log.go:181] (0x2eac070) (3) Data frame handling\nI0111 18:17:54.262371 4806 log.go:181] (0x247d8f0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.73.149:80/\nI0111 18:17:54.262509 4806 log.go:181] (0x2eac070) (3) Data frame sent\nI0111 18:17:54.262647 4806 log.go:181] (0x247d8f0) (5) Data frame sent\nI0111 18:17:54.268604 4806 log.go:181] (0x2eac5b0) Data frame received for 3\nI0111 18:17:54.268714 4806 log.go:181] (0x2eac070) (3) Data frame handling\nI0111 18:17:54.268917 4806 log.go:181] (0x2eac070) (3) Data frame sent\nI0111 18:17:54.269405 4806 log.go:181] (0x2eac5b0) Data frame received for 3\nI0111 18:17:54.269521 4806 log.go:181] (0x2eac070) (3) Data frame handling\nI0111 18:17:54.269602 4806 log.go:181] (0x2eac070) (3) Data frame sent\nI0111 18:17:54.269690 4806 log.go:181] (0x2eac5b0) Data frame received for 5\nI0111 18:17:54.269772 4806 log.go:181] (0x247d8f0) (5) Data frame handling\nI0111 18:17:54.269856 4806 log.go:181] (0x247d8f0) (5) Data frame sent\n+ echo\n+ curl -qI0111 18:17:54.269924 4806 log.go:181] (0x2eac5b0) Data frame received for 5\nI0111 18:17:54.270052 4806 log.go:181] (0x247d8f0) (5) Data frame handling\nI0111 18:17:54.270171 4806 log.go:181] (0x247d8f0) (5) Data frame sent\n -s --connect-timeout 2 http://10.96.73.149:80/\nI0111 18:17:54.275639 4806 log.go:181] (0x2eac5b0) Data frame received for 3\nI0111 18:17:54.275829 4806 log.go:181] (0x2eac070) (3) Data frame handling\nI0111 18:17:54.276020 4806 log.go:181] (0x2eac070) (3) Data frame sent\nI0111 18:17:54.276301 4806 log.go:181] (0x2eac5b0) Data frame received for 3\nI0111 18:17:54.276416 4806 log.go:181] (0x2eac070) (3) Data frame handling\nI0111 18:17:54.276504 4806 log.go:181] (0x2eac5b0) Data frame received for 5\nI0111 18:17:54.276612 4806 log.go:181] (0x247d8f0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.73.149:80/\nI0111 18:17:54.276703 4806 log.go:181] (0x2eac070) (3) Data frame sent\nI0111 18:17:54.276800 4806 log.go:181] (0x247d8f0) (5) Data frame sent\nI0111 18:17:54.282200 4806 log.go:181] (0x2eac5b0) Data frame received for 3\nI0111 18:17:54.282326 4806 log.go:181] (0x2eac070) (3) Data frame handling\nI0111 18:17:54.282462 4806 log.go:181] (0x2eac070) (3) Data frame sent\nI0111 18:17:54.282775 4806 log.go:181] (0x2eac5b0) Data frame received for 5\nI0111 18:17:54.282911 4806 log.go:181] (0x247d8f0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.73.149:80/\nI0111 18:17:54.283128 4806 log.go:181] (0x2eac5b0) Data frame received for 3\nI0111 18:17:54.283277 4806 log.go:181] (0x2eac070) (3) Data frame handling\nI0111 18:17:54.283405 4806 log.go:181] (0x247d8f0) (5) Data frame sent\nI0111 
18:17:54.283546 4806 log.go:181] (0x2eac070) (3) Data frame sent\nI0111 18:17:54.288258 4806 log.go:181] (0x2eac5b0) Data frame received for 3\nI0111 18:17:54.288409 4806 log.go:181] (0x2eac070) (3) Data frame handling\nI0111 18:17:54.288572 4806 log.go:181] (0x2eac070) (3) Data frame sent\nI0111 18:17:54.288793 4806 log.go:181] (0x2eac5b0) Data frame received for 5\nI0111 18:17:54.289043 4806 log.go:181] (0x247d8f0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.73.149:80/\nI0111 18:17:54.289180 4806 log.go:181] (0x2eac5b0) Data frame received for 3\nI0111 18:17:54.289293 4806 log.go:181] (0x2eac070) (3) Data frame handling\nI0111 18:17:54.289386 4806 log.go:181] (0x247d8f0) (5) Data frame sent\nI0111 18:17:54.289543 4806 log.go:181] (0x2eac070) (3) Data frame sent\nI0111 18:17:54.293991 4806 log.go:181] (0x2eac5b0) Data frame received for 3\nI0111 18:17:54.294111 4806 log.go:181] (0x2eac070) (3) Data frame handling\nI0111 18:17:54.294239 4806 log.go:181] (0x2eac070) (3) Data frame sent\nI0111 18:17:54.295129 4806 log.go:181] (0x2eac5b0) Data frame received for 5\nI0111 18:17:54.295290 4806 log.go:181] (0x247d8f0) (5) Data frame handling\nI0111 18:17:54.295420 4806 log.go:181] (0x247d8f0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.73.149:80/\nI0111 18:17:54.295590 4806 log.go:181] (0x2eac5b0) Data frame received for 3\nI0111 18:17:54.295723 4806 log.go:181] (0x2eac070) (3) Data frame handling\nI0111 18:17:54.295855 4806 log.go:181] (0x2eac070) (3) Data frame sent\nI0111 18:17:54.299894 4806 log.go:181] (0x2eac5b0) Data frame received for 3\nI0111 18:17:54.300002 4806 log.go:181] (0x2eac070) (3) Data frame handling\nI0111 18:17:54.300114 4806 log.go:181] (0x2eac070) (3) Data frame sent\nI0111 18:17:54.300801 4806 log.go:181] (0x2eac5b0) Data frame received for 5\nI0111 18:17:54.301041 4806 log.go:181] (0x247d8f0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.73.149:80/\nI0111 18:17:54.301174 4806 log.go:181] (0x2eac5b0) Data frame received for 3\nI0111 18:17:54.301323 4806 log.go:181] (0x2eac070) (3) Data frame handling\nI0111 18:17:54.301439 4806 log.go:181] (0x247d8f0) (5) Data frame sent\nI0111 18:17:54.301562 4806 log.go:181] (0x2eac070) (3) Data frame sent\nI0111 18:17:54.306398 4806 log.go:181] (0x2eac5b0) Data frame received for 3\nI0111 18:17:54.306491 4806 log.go:181] (0x2eac070) (3) Data frame handling\nI0111 18:17:54.306608 4806 log.go:181] (0x2eac070) (3) Data frame sent\nI0111 18:17:54.307042 4806 log.go:181] (0x2eac5b0) Data frame received for 3\nI0111 18:17:54.307208 4806 log.go:181] (0x2eac070) (3) Data frame handling\nI0111 18:17:54.307451 4806 log.go:181] (0x2eac5b0) Data frame received for 5\nI0111 18:17:54.307588 4806 log.go:181] (0x247d8f0) (5) Data frame handling\nI0111 18:17:54.309417 4806 log.go:181] (0x2eac5b0) Data frame received for 1\nI0111 18:17:54.309508 4806 log.go:181] (0x2eac620) (1) Data frame handling\nI0111 18:17:54.309648 4806 log.go:181] (0x2eac620) (1) Data frame sent\nI0111 18:17:54.310646 4806 log.go:181] (0x2eac5b0) (0x2eac620) Stream removed, broadcasting: 1\nI0111 18:17:54.313317 4806 log.go:181] (0x2eac5b0) Go away received\nI0111 18:17:54.317159 4806 log.go:181] (0x2eac5b0) (0x2eac620) Stream removed, broadcasting: 1\nI0111 18:17:54.317401 4806 log.go:181] (0x2eac5b0) (0x2eac070) Stream removed, broadcasting: 3\nI0111 18:17:54.317614 4806 log.go:181] (0x2eac5b0) (0x247d8f0) Stream removed, broadcasting: 5\n" Jan 11 18:17:54.332: INFO: stdout: 
"\naffinity-clusterip-transition-wkgnm\naffinity-clusterip-transition-d6fb6\naffinity-clusterip-transition-d6fb6\naffinity-clusterip-transition-wkgnm\naffinity-clusterip-transition-d6fb6\naffinity-clusterip-transition-d6fb6\naffinity-clusterip-transition-wkgnm\naffinity-clusterip-transition-d6fb6\naffinity-clusterip-transition-rh6rn\naffinity-clusterip-transition-d6fb6\naffinity-clusterip-transition-d6fb6\naffinity-clusterip-transition-d6fb6\naffinity-clusterip-transition-wkgnm\naffinity-clusterip-transition-d6fb6\naffinity-clusterip-transition-rh6rn\naffinity-clusterip-transition-rh6rn" Jan 11 18:17:54.333: INFO: Received response from host: affinity-clusterip-transition-wkgnm Jan 11 18:17:54.333: INFO: Received response from host: affinity-clusterip-transition-d6fb6 Jan 11 18:17:54.333: INFO: Received response from host: affinity-clusterip-transition-d6fb6 Jan 11 18:17:54.333: INFO: Received response from host: affinity-clusterip-transition-wkgnm Jan 11 18:17:54.333: INFO: Received response from host: affinity-clusterip-transition-d6fb6 Jan 11 18:17:54.333: INFO: Received response from host: affinity-clusterip-transition-d6fb6 Jan 11 18:17:54.333: INFO: Received response from host: affinity-clusterip-transition-wkgnm Jan 11 18:17:54.333: INFO: Received response from host: affinity-clusterip-transition-d6fb6 Jan 11 18:17:54.333: INFO: Received response from host: affinity-clusterip-transition-rh6rn Jan 11 18:17:54.333: INFO: Received response from host: affinity-clusterip-transition-d6fb6 Jan 11 18:17:54.333: INFO: Received response from host: affinity-clusterip-transition-d6fb6 Jan 11 18:17:54.333: INFO: Received response from host: affinity-clusterip-transition-d6fb6 Jan 11 18:17:54.333: INFO: Received response from host: affinity-clusterip-transition-wkgnm Jan 11 18:17:54.333: INFO: Received response from host: affinity-clusterip-transition-d6fb6 Jan 11 18:17:54.333: INFO: Received response from host: affinity-clusterip-transition-rh6rn Jan 11 18:17:54.333: INFO: Received response from host: affinity-clusterip-transition-rh6rn Jan 11 18:17:54.372: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-4287 exec execpod-affinity4rdqd -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.96.73.149:80/ ; done' Jan 11 18:17:55.936: INFO: stderr: "I0111 18:17:55.697126 4826 log.go:181] (0x2ae20e0) (0x2ae2150) Create stream\nI0111 18:17:55.700326 4826 log.go:181] (0x2ae20e0) (0x2ae2150) Stream added, broadcasting: 1\nI0111 18:17:55.718987 4826 log.go:181] (0x2ae20e0) Reply frame received for 1\nI0111 18:17:55.719441 4826 log.go:181] (0x2ae20e0) (0x273f260) Create stream\nI0111 18:17:55.719511 4826 log.go:181] (0x2ae20e0) (0x273f260) Stream added, broadcasting: 3\nI0111 18:17:55.720686 4826 log.go:181] (0x2ae20e0) Reply frame received for 3\nI0111 18:17:55.720957 4826 log.go:181] (0x2ae20e0) (0x269c0e0) Create stream\nI0111 18:17:55.721022 4826 log.go:181] (0x2ae20e0) (0x269c0e0) Stream added, broadcasting: 5\nI0111 18:17:55.722378 4826 log.go:181] (0x2ae20e0) Reply frame received for 5\nI0111 18:17:55.822335 4826 log.go:181] (0x2ae20e0) Data frame received for 3\nI0111 18:17:55.822573 4826 log.go:181] (0x273f260) (3) Data frame handling\nI0111 18:17:55.822710 4826 log.go:181] (0x2ae20e0) Data frame received for 5\nI0111 18:17:55.822885 4826 log.go:181] (0x269c0e0) (5) Data frame handling\nI0111 18:17:55.823109 4826 log.go:181] (0x269c0e0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl 
-q -s --connect-timeout 2 http://10.96.73.149:80/\nI0111 18:17:55.823486 4826 log.go:181] (0x273f260) (3) Data frame sent\nI0111 18:17:55.824557 4826 log.go:181] (0x2ae20e0) Data frame received for 3\nI0111 18:17:55.824664 4826 log.go:181] (0x273f260) (3) Data frame handling\nI0111 18:17:55.824772 4826 log.go:181] (0x273f260) (3) Data frame sent\nI0111 18:17:55.825045 4826 log.go:181] (0x2ae20e0) Data frame received for 5\nI0111 18:17:55.825154 4826 log.go:181] (0x269c0e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.73.149:80/\nI0111 18:17:55.825269 4826 log.go:181] (0x2ae20e0) Data frame received for 3\nI0111 18:17:55.825357 4826 log.go:181] (0x273f260) (3) Data frame handling\nI0111 18:17:55.825448 4826 log.go:181] (0x269c0e0) (5) Data frame sent\nI0111 18:17:55.825556 4826 log.go:181] (0x273f260) (3) Data frame sent\nI0111 18:17:55.832991 4826 log.go:181] (0x2ae20e0) Data frame received for 3\nI0111 18:17:55.833086 4826 log.go:181] (0x273f260) (3) Data frame handling\nI0111 18:17:55.833186 4826 log.go:181] (0x273f260) (3) Data frame sent\nI0111 18:17:55.834150 4826 log.go:181] (0x2ae20e0) Data frame received for 3\nI0111 18:17:55.834250 4826 log.go:181] (0x273f260) (3) Data frame handling\nI0111 18:17:55.834350 4826 log.go:181] (0x2ae20e0) Data frame received for 5\nI0111 18:17:55.834467 4826 log.go:181] (0x269c0e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.73.149:80/\nI0111 18:17:55.834567 4826 log.go:181] (0x273f260) (3) Data frame sent\nI0111 18:17:55.834705 4826 log.go:181] (0x269c0e0) (5) Data frame sent\nI0111 18:17:55.840631 4826 log.go:181] (0x2ae20e0) Data frame received for 3\nI0111 18:17:55.840778 4826 log.go:181] (0x273f260) (3) Data frame handling\nI0111 18:17:55.841006 4826 log.go:181] (0x2ae20e0) Data frame received for 5\nI0111 18:17:55.841098 4826 log.go:181] (0x269c0e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.73.149:80/\nI0111 18:17:55.841195 4826 log.go:181] (0x273f260) (3) Data frame sent\nI0111 18:17:55.841353 4826 log.go:181] (0x269c0e0) (5) Data frame sent\nI0111 18:17:55.841465 4826 log.go:181] (0x2ae20e0) Data frame received for 3\nI0111 18:17:55.841537 4826 log.go:181] (0x273f260) (3) Data frame handling\nI0111 18:17:55.841623 4826 log.go:181] (0x273f260) (3) Data frame sent\nI0111 18:17:55.845749 4826 log.go:181] (0x2ae20e0) Data frame received for 3\nI0111 18:17:55.845876 4826 log.go:181] (0x273f260) (3) Data frame handling\nI0111 18:17:55.846007 4826 log.go:181] (0x273f260) (3) Data frame sent\nI0111 18:17:55.846312 4826 log.go:181] (0x2ae20e0) Data frame received for 3\nI0111 18:17:55.846441 4826 log.go:181] (0x273f260) (3) Data frame handling\nI0111 18:17:55.846566 4826 log.go:181] (0x2ae20e0) Data frame received for 5\nI0111 18:17:55.846676 4826 log.go:181] (0x269c0e0) (5) Data frame handling\nI0111 18:17:55.846781 4826 log.go:181] (0x269c0e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.73.149:80/\nI0111 18:17:55.846883 4826 log.go:181] (0x273f260) (3) Data frame sent\nI0111 18:17:55.851425 4826 log.go:181] (0x2ae20e0) Data frame received for 3\nI0111 18:17:55.851560 4826 log.go:181] (0x273f260) (3) Data frame handling\nI0111 18:17:55.851678 4826 log.go:181] (0x273f260) (3) Data frame sent\nI0111 18:17:55.853459 4826 log.go:181] (0x2ae20e0) Data frame received for 5\nI0111 18:17:55.853587 4826 log.go:181] (0x269c0e0) (5) Data frame handling\nI0111 18:17:55.853693 4826 log.go:181] (0x269c0e0) (5) Data frame sent\n+ 
echo\n+ curl -q -s --connect-timeout 2 http://10.96.73.149:80/\nI0111 18:17:55.853790 4826 log.go:181] (0x2ae20e0) Data frame received for 3\nI0111 18:17:55.853873 4826 log.go:181] (0x273f260) (3) Data frame handling\nI0111 18:17:55.853976 4826 log.go:181] (0x273f260) (3) Data frame sent\nI0111 18:17:55.856489 4826 log.go:181] (0x2ae20e0) Data frame received for 3\nI0111 18:17:55.856695 4826 log.go:181] (0x273f260) (3) Data frame handling\nI0111 18:17:55.856991 4826 log.go:181] (0x273f260) (3) Data frame sent\nI0111 18:17:55.857400 4826 log.go:181] (0x2ae20e0) Data frame received for 3\nI0111 18:17:55.857557 4826 log.go:181] (0x273f260) (3) Data frame handling\nI0111 18:17:55.857665 4826 log.go:181] (0x2ae20e0) Data frame received for 5\nI0111 18:17:55.857868 4826 log.go:181] (0x269c0e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.73.149:80/\nI0111 18:17:55.858011 4826 log.go:181] (0x273f260) (3) Data frame sent\nI0111 18:17:55.858139 4826 log.go:181] (0x269c0e0) (5) Data frame sent\nI0111 18:17:55.863198 4826 log.go:181] (0x2ae20e0) Data frame received for 3\nI0111 18:17:55.863325 4826 log.go:181] (0x273f260) (3) Data frame handling\nI0111 18:17:55.863456 4826 log.go:181] (0x273f260) (3) Data frame sent\nI0111 18:17:55.863618 4826 log.go:181] (0x2ae20e0) Data frame received for 5\nI0111 18:17:55.863727 4826 log.go:181] (0x269c0e0) (5) Data frame handling\nI0111 18:17:55.863842 4826 log.go:181] (0x269c0e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.73.149:80/\nI0111 18:17:55.864000 4826 log.go:181] (0x2ae20e0) Data frame received for 3\nI0111 18:17:55.864100 4826 log.go:181] (0x273f260) (3) Data frame handling\nI0111 18:17:55.864220 4826 log.go:181] (0x273f260) (3) Data frame sent\nI0111 18:17:55.868786 4826 log.go:181] (0x2ae20e0) Data frame received for 3\nI0111 18:17:55.869000 4826 log.go:181] (0x273f260) (3) Data frame handling\nI0111 18:17:55.869106 4826 log.go:181] (0x273f260) (3) Data frame sent\nI0111 18:17:55.869692 4826 log.go:181] (0x2ae20e0) Data frame received for 5\nI0111 18:17:55.869821 4826 log.go:181] (0x269c0e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.73.149:80/\nI0111 18:17:55.869950 4826 log.go:181] (0x2ae20e0) Data frame received for 3\nI0111 18:17:55.870382 4826 log.go:181] (0x273f260) (3) Data frame handling\nI0111 18:17:55.870514 4826 log.go:181] (0x273f260) (3) Data frame sent\nI0111 18:17:55.870628 4826 log.go:181] (0x269c0e0) (5) Data frame sent\nI0111 18:17:55.873247 4826 log.go:181] (0x2ae20e0) Data frame received for 3\nI0111 18:17:55.873349 4826 log.go:181] (0x273f260) (3) Data frame handling\nI0111 18:17:55.873455 4826 log.go:181] (0x273f260) (3) Data frame sent\nI0111 18:17:55.874083 4826 log.go:181] (0x2ae20e0) Data frame received for 3\nI0111 18:17:55.874207 4826 log.go:181] (0x273f260) (3) Data frame handling\nI0111 18:17:55.874302 4826 log.go:181] (0x273f260) (3) Data frame sent\nI0111 18:17:55.874400 4826 log.go:181] (0x2ae20e0) Data frame received for 5\nI0111 18:17:55.874479 4826 log.go:181] (0x269c0e0) (5) Data frame handling\nI0111 18:17:55.874593 4826 log.go:181] (0x269c0e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.73.149:80/\nI0111 18:17:55.880042 4826 log.go:181] (0x2ae20e0) Data frame received for 3\nI0111 18:17:55.880161 4826 log.go:181] (0x273f260) (3) Data frame handling\nI0111 18:17:55.880339 4826 log.go:181] (0x273f260) (3) Data frame sent\nI0111 18:17:55.880927 4826 log.go:181] (0x2ae20e0) Data frame 
received for 5\nI0111 18:17:55.881070 4826 log.go:181] (0x269c0e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.73.149:80/\nI0111 18:17:55.881181 4826 log.go:181] (0x2ae20e0) Data frame received for 3\nI0111 18:17:55.881322 4826 log.go:181] (0x273f260) (3) Data frame handling\nI0111 18:17:55.881428 4826 log.go:181] (0x269c0e0) (5) Data frame sent\nI0111 18:17:55.881526 4826 log.go:181] (0x273f260) (3) Data frame sent\nI0111 18:17:55.887336 4826 log.go:181] (0x2ae20e0) Data frame received for 3\nI0111 18:17:55.887434 4826 log.go:181] (0x273f260) (3) Data frame handling\nI0111 18:17:55.887535 4826 log.go:181] (0x273f260) (3) Data frame sent\nI0111 18:17:55.887935 4826 log.go:181] (0x2ae20e0) Data frame received for 5\nI0111 18:17:55.888025 4826 log.go:181] (0x269c0e0) (5) Data frame handling\nI0111 18:17:55.888093 4826 log.go:181] (0x269c0e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.73.149:80/\nI0111 18:17:55.888157 4826 log.go:181] (0x2ae20e0) Data frame received for 3\nI0111 18:17:55.888279 4826 log.go:181] (0x273f260) (3) Data frame handling\nI0111 18:17:55.888392 4826 log.go:181] (0x273f260) (3) Data frame sent\nI0111 18:17:55.894006 4826 log.go:181] (0x2ae20e0) Data frame received for 3\nI0111 18:17:55.894142 4826 log.go:181] (0x273f260) (3) Data frame handling\nI0111 18:17:55.894329 4826 log.go:181] (0x273f260) (3) Data frame sent\nI0111 18:17:55.895099 4826 log.go:181] (0x2ae20e0) Data frame received for 5\nI0111 18:17:55.895237 4826 log.go:181] (0x269c0e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.73.149:80/\nI0111 18:17:55.895335 4826 log.go:181] (0x2ae20e0) Data frame received for 3\nI0111 18:17:55.895453 4826 log.go:181] (0x273f260) (3) Data frame handling\nI0111 18:17:55.895615 4826 log.go:181] (0x269c0e0) (5) Data frame sent\nI0111 18:17:55.895787 4826 log.go:181] (0x273f260) (3) Data frame sent\nI0111 18:17:55.900118 4826 log.go:181] (0x2ae20e0) Data frame received for 3\nI0111 18:17:55.900249 4826 log.go:181] (0x273f260) (3) Data frame handling\nI0111 18:17:55.900364 4826 log.go:181] (0x273f260) (3) Data frame sent\nI0111 18:17:55.901604 4826 log.go:181] (0x2ae20e0) Data frame received for 5\nI0111 18:17:55.901822 4826 log.go:181] (0x2ae20e0) Data frame received for 3\nI0111 18:17:55.902034 4826 log.go:181] (0x273f260) (3) Data frame handling\nI0111 18:17:55.902179 4826 log.go:181] (0x269c0e0) (5) Data frame handling\nI0111 18:17:55.902385 4826 log.go:181] (0x269c0e0) (5) Data frame sent\n+ echo\nI0111 18:17:55.902547 4826 log.go:181] (0x2ae20e0) Data frame received for 5\nI0111 18:17:55.902809 4826 log.go:181] (0x269c0e0) (5) Data frame handling\n+ curl -q -s --connect-timeout 2 http://10.96.73.149:80/\nI0111 18:17:55.903034 4826 log.go:181] (0x273f260) (3) Data frame sent\nI0111 18:17:55.903210 4826 log.go:181] (0x269c0e0) (5) Data frame sent\nI0111 18:17:55.908065 4826 log.go:181] (0x2ae20e0) Data frame received for 3\nI0111 18:17:55.908178 4826 log.go:181] (0x273f260) (3) Data frame handling\nI0111 18:17:55.908333 4826 log.go:181] (0x273f260) (3) Data frame sent\nI0111 18:17:55.909261 4826 log.go:181] (0x2ae20e0) Data frame received for 5\nI0111 18:17:55.909404 4826 log.go:181] (0x269c0e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.73.149:80/\nI0111 18:17:55.909550 4826 log.go:181] (0x2ae20e0) Data frame received for 3\nI0111 18:17:55.909706 4826 log.go:181] (0x273f260) (3) Data frame handling\nI0111 18:17:55.909873 4826 log.go:181] 
(0x273f260) (3) Data frame sent\nI0111 18:17:55.910019 4826 log.go:181] (0x269c0e0) (5) Data frame sent\nI0111 18:17:55.915513 4826 log.go:181] (0x2ae20e0) Data frame received for 3\nI0111 18:17:55.915649 4826 log.go:181] (0x273f260) (3) Data frame handling\nI0111 18:17:55.915760 4826 log.go:181] (0x273f260) (3) Data frame sent\nI0111 18:17:55.915873 4826 log.go:181] (0x2ae20e0) Data frame received for 3\nI0111 18:17:55.916005 4826 log.go:181] (0x2ae20e0) Data frame received for 5\nI0111 18:17:55.916193 4826 log.go:181] (0x269c0e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.73.149:80/\nI0111 18:17:55.916273 4826 log.go:181] (0x273f260) (3) Data frame handling\nI0111 18:17:55.916396 4826 log.go:181] (0x269c0e0) (5) Data frame sent\nI0111 18:17:55.916507 4826 log.go:181] (0x273f260) (3) Data frame sent\nI0111 18:17:55.920693 4826 log.go:181] (0x2ae20e0) Data frame received for 3\nI0111 18:17:55.920798 4826 log.go:181] (0x273f260) (3) Data frame handling\nI0111 18:17:55.920977 4826 log.go:181] (0x273f260) (3) Data frame sent\nI0111 18:17:55.921570 4826 log.go:181] (0x2ae20e0) Data frame received for 3\nI0111 18:17:55.921689 4826 log.go:181] (0x273f260) (3) Data frame handling\nI0111 18:17:55.921762 4826 log.go:181] (0x2ae20e0) Data frame received for 5\nI0111 18:17:55.921847 4826 log.go:181] (0x269c0e0) (5) Data frame handling\nI0111 18:17:55.923074 4826 log.go:181] (0x2ae20e0) Data frame received for 1\nI0111 18:17:55.923189 4826 log.go:181] (0x2ae2150) (1) Data frame handling\nI0111 18:17:55.923299 4826 log.go:181] (0x2ae2150) (1) Data frame sent\nI0111 18:17:55.923800 4826 log.go:181] (0x2ae20e0) (0x2ae2150) Stream removed, broadcasting: 1\nI0111 18:17:55.925142 4826 log.go:181] (0x2ae20e0) Go away received\nI0111 18:17:55.927736 4826 log.go:181] (0x2ae20e0) (0x2ae2150) Stream removed, broadcasting: 1\nI0111 18:17:55.927970 4826 log.go:181] (0x2ae20e0) (0x273f260) Stream removed, broadcasting: 3\nI0111 18:17:55.928153 4826 log.go:181] (0x2ae20e0) (0x269c0e0) Stream removed, broadcasting: 5\n" Jan 11 18:17:55.940: INFO: stdout: "\naffinity-clusterip-transition-rh6rn\naffinity-clusterip-transition-rh6rn\naffinity-clusterip-transition-rh6rn\naffinity-clusterip-transition-rh6rn\naffinity-clusterip-transition-rh6rn\naffinity-clusterip-transition-rh6rn\naffinity-clusterip-transition-rh6rn\naffinity-clusterip-transition-rh6rn\naffinity-clusterip-transition-rh6rn\naffinity-clusterip-transition-rh6rn\naffinity-clusterip-transition-rh6rn\naffinity-clusterip-transition-rh6rn\naffinity-clusterip-transition-rh6rn\naffinity-clusterip-transition-rh6rn\naffinity-clusterip-transition-rh6rn\naffinity-clusterip-transition-rh6rn" Jan 11 18:17:55.940: INFO: Received response from host: affinity-clusterip-transition-rh6rn Jan 11 18:17:55.940: INFO: Received response from host: affinity-clusterip-transition-rh6rn Jan 11 18:17:55.940: INFO: Received response from host: affinity-clusterip-transition-rh6rn Jan 11 18:17:55.940: INFO: Received response from host: affinity-clusterip-transition-rh6rn Jan 11 18:17:55.940: INFO: Received response from host: affinity-clusterip-transition-rh6rn Jan 11 18:17:55.940: INFO: Received response from host: affinity-clusterip-transition-rh6rn Jan 11 18:17:55.940: INFO: Received response from host: affinity-clusterip-transition-rh6rn Jan 11 18:17:55.940: INFO: Received response from host: affinity-clusterip-transition-rh6rn Jan 11 18:17:55.941: INFO: Received response from host: affinity-clusterip-transition-rh6rn Jan 11 18:17:55.941: INFO: 
Received response from host: affinity-clusterip-transition-rh6rn Jan 11 18:17:55.941: INFO: Received response from host: affinity-clusterip-transition-rh6rn Jan 11 18:17:55.941: INFO: Received response from host: affinity-clusterip-transition-rh6rn Jan 11 18:17:55.941: INFO: Received response from host: affinity-clusterip-transition-rh6rn Jan 11 18:17:55.941: INFO: Received response from host: affinity-clusterip-transition-rh6rn Jan 11 18:17:55.941: INFO: Received response from host: affinity-clusterip-transition-rh6rn Jan 11 18:17:55.941: INFO: Received response from host: affinity-clusterip-transition-rh6rn Jan 11 18:17:55.941: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-transition in namespace services-4287, will wait for the garbage collector to delete the pods Jan 11 18:17:56.388: INFO: Deleting ReplicationController affinity-clusterip-transition took: 312.981421ms Jan 11 18:17:56.989: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 600.924628ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 18:18:10.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4287" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 • [SLOW TEST:31.546 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":309,"completed":261,"skipped":4599,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 18:18:10.061: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Jan 11 18:18:15.246: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 18:18:15.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "replicaset-2769" for this suite. • [SLOW TEST:5.342 seconds] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":309,"completed":262,"skipped":4669,"failed":0} SSS ------------------------------ [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 18:18:15.404: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 11 18:18:19.674: INFO: Deleting pod "var-expansion-376abb3c-832c-406f-915d-77357e49ffa2" in namespace "var-expansion-5652" Jan 11 18:18:19.682: INFO: Wait up to 5m0s for pod "var-expansion-376abb3c-832c-406f-915d-77357e49ffa2" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 18:19:11.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-5652" for this suite. 
• [SLOW TEST:56.361 seconds] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]","total":309,"completed":263,"skipped":4672,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 18:19:11.769: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: set up a multi version CRD Jan 11 18:19:11.871: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not serverd STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 18:21:16.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8502" for this suite. 
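The spec above flips one version of a multi-version CRD to served: false and then checks the published OpenAPI document. A rough by-hand equivalent, assuming a hypothetical CRD foos.example.com whose second versions entry is the one being turned off:

# Stop serving the second version of the hypothetical CRD (index 1 in spec.versions):
kubectl patch crd foos.example.com --type=json \
  -p='[{"op":"replace","path":"/spec/versions/1/served","value":false}]'
# The aggregated OpenAPI spec should no longer mention that version's definition
# (CRD definition names follow the reversed group, e.g. com.example.v2.Foo):
kubectl get --raw /openapi/v2 | grep -c 'com.example.v2.Foo'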
• [SLOW TEST:125.004 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":309,"completed":264,"skipped":4706,"failed":0} [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 18:21:16.774: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should support proxy with --port 0 [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: starting the proxy server Jan 11 18:21:16.934: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-2899 proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 18:21:18.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2899" for this suite. 
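The proxy spec above starts kubectl proxy with --port 0, which asks for a random free port, and then curls /api/ through it. The same thing by hand (the printed port will differ on every run):

kubectl proxy --port=0 --disable-filter=true &
# kubectl prints e.g. "Starting to serve on 127.0.0.1:40041"; use whatever port it reports:
curl -s http://127.0.0.1:40041/api/
kill %1    # stop the background proxy when done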
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":309,"completed":265,"skipped":4706,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 18:21:18.068: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Jan 11 18:21:27.378: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Jan 11 18:21:29.399: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745986087, loc:(*time.Location)(0x5f133f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745986087, loc:(*time.Location)(0x5f133f0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745986087, loc:(*time.Location)(0x5f133f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745986087, loc:(*time.Location)(0x5f133f0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-7d6697c5b7\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 11 18:21:32.452: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 11 18:21:32.461: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 18:21:33.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-3863" for this suite. 
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:15.850 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":309,"completed":266,"skipped":4741,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 18:21:33.924: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [BeforeEach] Kubectl run pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1520 [It] should create a pod from an image when restart is Never [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: running the image docker.io/library/httpd:2.4.38-alpine Jan 11 18:21:34.027: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-6952 run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine' Jan 11 18:21:35.288: INFO: stderr: "" Jan 11 18:21:35.288: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1524 Jan 11 18:21:35.302: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-6952 delete pods e2e-test-httpd-pod' Jan 11 18:22:30.110: INFO: stderr: "" Jan 11 18:22:30.110: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 18:22:30.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6952" for this suite. 
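The kubectl-run spec above boils down to a single command that also appears verbatim in the log; with --restart=Never, kubectl creates a bare Pod (no controller), which is what the spec then verifies and deletes:

kubectl run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine
kubectl get pod e2e-test-httpd-pod     # a plain Pod object, not managed by a Deployment
kubectl delete pod e2e-test-httpd-pod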
• [SLOW TEST:56.200 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1517 should create a pod from an image when restart is Never [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":309,"completed":267,"skipped":4778,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 18:22:30.125: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test override command Jan 11 18:22:30.256: INFO: Waiting up to 5m0s for pod "client-containers-ad5fd49a-ea6d-4bb6-a8e8-18d1681f2e53" in namespace "containers-8169" to be "Succeeded or Failed" Jan 11 18:22:30.266: INFO: Pod "client-containers-ad5fd49a-ea6d-4bb6-a8e8-18d1681f2e53": Phase="Pending", Reason="", readiness=false. Elapsed: 9.713859ms Jan 11 18:22:32.275: INFO: Pod "client-containers-ad5fd49a-ea6d-4bb6-a8e8-18d1681f2e53": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018369337s Jan 11 18:22:34.304: INFO: Pod "client-containers-ad5fd49a-ea6d-4bb6-a8e8-18d1681f2e53": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.047270866s STEP: Saw pod success Jan 11 18:22:34.304: INFO: Pod "client-containers-ad5fd49a-ea6d-4bb6-a8e8-18d1681f2e53" satisfied condition "Succeeded or Failed" Jan 11 18:22:34.310: INFO: Trying to get logs from node leguer-worker2 pod client-containers-ad5fd49a-ea6d-4bb6-a8e8-18d1681f2e53 container agnhost-container: STEP: delete the pod Jan 11 18:22:34.352: INFO: Waiting for pod client-containers-ad5fd49a-ea6d-4bb6-a8e8-18d1681f2e53 to disappear Jan 11 18:22:34.387: INFO: Pod client-containers-ad5fd49a-ea6d-4bb6-a8e8-18d1681f2e53 no longer exists [AfterEach] [k8s.io] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 18:22:34.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-8169" for this suite. 
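The Docker Containers spec above overrides the image's default command (its ENTRYPOINT) via the pod's command field. A self-contained sketch with a placeholder name and busybox instead of the agnhost image the suite uses:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: entrypoint-override-demo
spec:
  restartPolicy: Never
  containers:
  - name: demo
    image: busybox
    command: ["echo", "entrypoint overridden"]   # replaces the image's ENTRYPOINT
EOF
kubectl logs entrypoint-override-demo   # prints "entrypoint overridden"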
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":309,"completed":268,"skipped":4790,"failed":0} SSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 18:22:34.404: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 11 18:23:03.567: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 11 18:23:05.585: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745986183, loc:(*time.Location)(0x5f133f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745986183, loc:(*time.Location)(0x5f133f0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745986183, loc:(*time.Location)(0x5f133f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745986183, loc:(*time.Location)(0x5f133f0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 11 18:23:07.593: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745986183, loc:(*time.Location)(0x5f133f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745986183, loc:(*time.Location)(0x5f133f0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745986183, loc:(*time.Location)(0x5f133f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745986183, loc:(*time.Location)(0x5f133f0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 11 18:23:10.628: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap 
creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 18:23:20.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8917" for this suite. STEP: Destroying namespace "webhook-8917-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:46.651 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":309,"completed":269,"skipped":4793,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 18:23:21.060: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0111 18:23:31.211871 10 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Jan 11 18:24:33.243: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 18:24:33.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3559" for this suite. 
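The garbage-collector spec above creates a ReplicationController, deletes it without orphaning, and waits for the dependent pods to disappear. The by-hand equivalent, with a hypothetical RC named gc-demo whose pods carry the label app=gc-demo:

kubectl create -f gc-demo-rc.yaml      # hypothetical ReplicationController manifest
kubectl delete rc gc-demo              # default delete: dependents are not orphaned
kubectl get pods -l app=gc-demo        # the garbage collector removes the pods shortly after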
• [SLOW TEST:72.201 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":309,"completed":270,"skipped":4817,"failed":0} SSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 18:24:33.263: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [AfterEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 18:24:37.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-9260" for this suite. 
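The Kubelet spec above checks that hostAliases entries end up in the container's /etc/hosts. A minimal stand-alone example (pod name and aliases are placeholders):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-demo
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "127.0.0.1"
    hostnames: ["foo.local", "bar.local"]
  containers:
  - name: demo
    image: busybox
    command: ["cat", "/etc/hosts"]
EOF
kubectl logs hostaliases-demo   # output should contain a "127.0.0.1  foo.local  bar.local" entry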
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":271,"skipped":4821,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should test the lifecycle of an Endpoint [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 18:24:37.445: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should test the lifecycle of an Endpoint [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating an Endpoint STEP: waiting for available Endpoint STEP: listing all Endpoints STEP: updating the Endpoint STEP: fetching the Endpoint STEP: patching the Endpoint STEP: fetching the Endpoint STEP: deleting the Endpoint by Collection STEP: waiting for Endpoint deletion STEP: fetching the Endpoint [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 18:24:37.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8141" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 •{"msg":"PASSED [sig-network] Services should test the lifecycle of an Endpoint [Conformance]","total":309,"completed":272,"skipped":4844,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 18:24:37.647: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Jan 11 18:24:37.827: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-5123 3fd089bc-da97-4a77-ab07-b0ffdbd4adb7 217227 0 2021-01-11 18:24:37 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2021-01-11 18:24:37 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Jan 11 18:24:37.828: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-5123 3fd089bc-da97-4a77-ab07-b0ffdbd4adb7 217228 0 2021-01-11 18:24:37 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2021-01-11 18:24:37 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Jan 11 18:24:37.853: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-5123 3fd089bc-da97-4a77-ab07-b0ffdbd4adb7 217229 0 2021-01-11 18:24:37 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2021-01-11 18:24:37 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Jan 11 18:24:37.854: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-5123 3fd089bc-da97-4a77-ab07-b0ffdbd4adb7 217230 0 2021-01-11 18:24:37 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2021-01-11 18:24:37 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 18:24:37.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-5123" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":309,"completed":273,"skipped":4882,"failed":0} SSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 18:24:37.878: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating pod liveness-4930b8b5-3290-467f-9a3e-3634c68d5bae in namespace container-probe-4217 Jan 11 18:24:42.027: INFO: Started pod liveness-4930b8b5-3290-467f-9a3e-3634c68d5bae in namespace container-probe-4217 STEP: checking the pod's current state and verifying that restartCount is present Jan 11 18:24:42.032: INFO: Initial restart count of pod liveness-4930b8b5-3290-467f-9a3e-3634c68d5bae is 0 Jan 11 18:25:00.135: INFO: Restart count of pod container-probe-4217/liveness-4930b8b5-3290-467f-9a3e-3634c68d5bae is now 1 (18.102489045s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 18:25:00.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4217" for this suite. 
• [SLOW TEST:22.308 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":309,"completed":274,"skipped":4887,"failed":0} SS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 18:25:00.187: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating projection with secret that has name projected-secret-test-813bd97b-03d4-4302-a827-785cd28580ff STEP: Creating a pod to test consume secrets Jan 11 18:25:00.291: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-eb831b73-eca8-4a65-83fc-98d9d0bc6a61" in namespace "projected-8765" to be "Succeeded or Failed" Jan 11 18:25:00.641: INFO: Pod "pod-projected-secrets-eb831b73-eca8-4a65-83fc-98d9d0bc6a61": Phase="Pending", Reason="", readiness=false. Elapsed: 350.301227ms Jan 11 18:25:02.650: INFO: Pod "pod-projected-secrets-eb831b73-eca8-4a65-83fc-98d9d0bc6a61": Phase="Pending", Reason="", readiness=false. Elapsed: 2.358953475s Jan 11 18:25:04.658: INFO: Pod "pod-projected-secrets-eb831b73-eca8-4a65-83fc-98d9d0bc6a61": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.367530509s STEP: Saw pod success Jan 11 18:25:04.659: INFO: Pod "pod-projected-secrets-eb831b73-eca8-4a65-83fc-98d9d0bc6a61" satisfied condition "Succeeded or Failed" Jan 11 18:25:04.665: INFO: Trying to get logs from node leguer-worker2 pod pod-projected-secrets-eb831b73-eca8-4a65-83fc-98d9d0bc6a61 container projected-secret-volume-test: STEP: delete the pod Jan 11 18:25:04.717: INFO: Waiting for pod pod-projected-secrets-eb831b73-eca8-4a65-83fc-98d9d0bc6a61 to disappear Jan 11 18:25:04.727: INFO: Pod pod-projected-secrets-eb831b73-eca8-4a65-83fc-98d9d0bc6a61 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 18:25:04.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8765" for this suite. 
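The projected-secret spec above mounts a secret through a projected volume with a defaultMode and checks the resulting file permissions. A minimal sketch with placeholder names:

kubectl create secret generic projected-demo-secret --from-literal=username=admin
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo
spec:
  restartPolicy: Never
  containers:
  - name: demo
    image: busybox
    command: ["sh", "-c", "stat -Lc '%a' /etc/creds/username"]
    volumeMounts:
    - name: creds
      mountPath: /etc/creds
  volumes:
  - name: creds
    projected:
      defaultMode: 0400
      sources:
      - secret:
          name: projected-demo-secret
EOF
kubectl logs projected-secret-demo   # prints "400" if the mode was applied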
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":275,"skipped":4889,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 18:25:04.762: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating configMap with name configmap-test-volume-map-9770f533-4e71-4235-bc52-985bad10c886 STEP: Creating a pod to test consume configMaps Jan 11 18:25:04.916: INFO: Waiting up to 5m0s for pod "pod-configmaps-d0ae2af9-b42d-455d-ab0d-1b5a4073a453" in namespace "configmap-6016" to be "Succeeded or Failed" Jan 11 18:25:04.989: INFO: Pod "pod-configmaps-d0ae2af9-b42d-455d-ab0d-1b5a4073a453": Phase="Pending", Reason="", readiness=false. Elapsed: 73.300933ms Jan 11 18:25:07.105: INFO: Pod "pod-configmaps-d0ae2af9-b42d-455d-ab0d-1b5a4073a453": Phase="Pending", Reason="", readiness=false. Elapsed: 2.188823526s Jan 11 18:25:09.372: INFO: Pod "pod-configmaps-d0ae2af9-b42d-455d-ab0d-1b5a4073a453": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.455857195s STEP: Saw pod success Jan 11 18:25:09.372: INFO: Pod "pod-configmaps-d0ae2af9-b42d-455d-ab0d-1b5a4073a453" satisfied condition "Succeeded or Failed" Jan 11 18:25:09.400: INFO: Trying to get logs from node leguer-worker2 pod pod-configmaps-d0ae2af9-b42d-455d-ab0d-1b5a4073a453 container agnhost-container: STEP: delete the pod Jan 11 18:25:09.457: INFO: Waiting for pod pod-configmaps-d0ae2af9-b42d-455d-ab0d-1b5a4073a453 to disappear Jan 11 18:25:09.463: INFO: Pod pod-configmaps-d0ae2af9-b42d-455d-ab0d-1b5a4073a453 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 18:25:09.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6016" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":309,"completed":276,"skipped":4902,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] server version should find the server version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] server version /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 18:25:09.526: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename server-version STEP: Waiting for a default service account to be provisioned in namespace [It] should find the server version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Request ServerVersion STEP: Confirm major version Jan 11 18:25:09.682: INFO: Major version: 1 STEP: Confirm minor version Jan 11 18:25:09.682: INFO: cleanMinorVersion: 20 Jan 11 18:25:09.682: INFO: Minor version: 20 [AfterEach] [sig-api-machinery] server version /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 18:25:09.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "server-version-872" for this suite. •{"msg":"PASSED [sig-api-machinery] server version should find the server version [Conformance]","total":309,"completed":277,"skipped":4936,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 18:25:09.699: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 18:25:14.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-9764" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":309,"completed":278,"skipped":4942,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 18:25:14.368: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test emptydir volume type on tmpfs Jan 11 18:25:14.450: INFO: Waiting up to 5m0s for pod "pod-182058dd-d27d-4929-a7ce-904a771a21e4" in namespace "emptydir-4526" to be "Succeeded or Failed" Jan 11 18:25:14.485: INFO: Pod "pod-182058dd-d27d-4929-a7ce-904a771a21e4": Phase="Pending", Reason="", readiness=false. Elapsed: 34.921241ms Jan 11 18:25:16.558: INFO: Pod "pod-182058dd-d27d-4929-a7ce-904a771a21e4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.107712246s Jan 11 18:25:18.573: INFO: Pod "pod-182058dd-d27d-4929-a7ce-904a771a21e4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.123279097s STEP: Saw pod success Jan 11 18:25:18.573: INFO: Pod "pod-182058dd-d27d-4929-a7ce-904a771a21e4" satisfied condition "Succeeded or Failed" Jan 11 18:25:18.580: INFO: Trying to get logs from node leguer-worker pod pod-182058dd-d27d-4929-a7ce-904a771a21e4 container test-container: STEP: delete the pod Jan 11 18:25:18.641: INFO: Waiting for pod pod-182058dd-d27d-4929-a7ce-904a771a21e4 to disappear Jan 11 18:25:18.670: INFO: Pod pod-182058dd-d27d-4929-a7ce-904a771a21e4 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 18:25:18.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4526" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":279,"skipped":4954,"failed":0} SSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 18:25:18.687: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:187 [It] should be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Jan 11 18:25:23.368: INFO: Successfully updated pod "pod-update-5a2bc6dc-c79f-428a-8f78-9389ea5952d9" STEP: verifying the updated pod is in kubernetes Jan 11 18:25:23.450: INFO: Pod update OK [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 18:25:23.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5908" for this suite. •{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":309,"completed":280,"skipped":4959,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 18:25:23.466: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:187 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Jan 11 18:25:28.139: INFO: Successfully updated pod "pod-update-activedeadlineseconds-fd36fdb3-afbd-448e-9db1-ab43e78de2ee" Jan 11 18:25:28.139: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-fd36fdb3-afbd-448e-9db1-ab43e78de2ee" in namespace "pods-1298" to be "terminated due to deadline exceeded" Jan 11 18:25:28.199: INFO: Pod "pod-update-activedeadlineseconds-fd36fdb3-afbd-448e-9db1-ab43e78de2ee": Phase="Running", Reason="", readiness=true. Elapsed: 59.220578ms Jan 11 18:25:30.234: INFO: Pod "pod-update-activedeadlineseconds-fd36fdb3-afbd-448e-9db1-ab43e78de2ee": Phase="Failed", Reason="DeadlineExceeded", readiness=false. 
Elapsed: 2.094492398s Jan 11 18:25:30.234: INFO: Pod "pod-update-activedeadlineseconds-fd36fdb3-afbd-448e-9db1-ab43e78de2ee" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 18:25:30.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1298" for this suite. • [SLOW TEST:6.787 seconds] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":309,"completed":281,"skipped":4967,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 18:25:30.258: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating pod busybox-e859de7b-0ff2-4af5-a10b-a573e9a86289 in namespace container-probe-6334 Jan 11 18:25:34.380: INFO: Started pod busybox-e859de7b-0ff2-4af5-a10b-a573e9a86289 in namespace container-probe-6334 STEP: checking the pod's current state and verifying that restartCount is present Jan 11 18:25:34.385: INFO: Initial restart count of pod busybox-e859de7b-0ff2-4af5-a10b-a573e9a86289 is 0 Jan 11 18:26:25.205: INFO: Restart count of pod container-probe-6334/busybox-e859de7b-0ff2-4af5-a10b-a573e9a86289 is now 1 (50.819139177s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 18:26:25.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6334" for this suite. 
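The container-probe-6334 spec above relies on a busybox pod whose exec liveness probe runs "cat /tmp/health"; the container removes that file partway through its life, so the probe fails and the restart count climbs to 1. A sketch of that pod shape, built against the v1.20-era k8s.io/api used by this run (newer releases rename the embedded Handler field to ProbeHandler); names, timings, and the image are illustrative:

// livenessprobe.go: a busybox pod that is healthy for ~30s, then deletes the file
// its exec liveness probe checks, triggering a kubelet restart of the container.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-liveness-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "busybox",
				Image: "busybox",
				// Healthy while /tmp/health exists; the probe starts failing after rm.
				Command: []string{"/bin/sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"},
				LivenessProbe: &corev1.Probe{
					Handler: corev1.Handler{
						Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
					},
					InitialDelaySeconds: 5,
					PeriodSeconds:       5,
					FailureThreshold:    1,
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
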
• [SLOW TEST:55.028 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":309,"completed":282,"skipped":5014,"failed":0} SS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 18:26:25.287: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... Jan 11 18:26:25.467: INFO: Created pod &Pod{ObjectMeta:{test-dns-nameservers dns-8689 e3f26bbe-156b-4c6c-aa2e-879ce7df5c25 217813 0 2021-01-11 18:26:25 +0000 UTC map[] map[] [] [] [{e2e.test Update v1 2021-01-11 18:26:25 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-t7zlp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-t7zlp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost-container,Image:k8s.gcr.io/e2e-test-images/agnhost:2.21,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-t7zlp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,St
dinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 11 18:26:25.478: INFO: The status of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true) Jan 11 18:26:27.485: INFO: The status of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true) Jan 11 18:26:29.487: INFO: The status of Pod test-dns-nameservers is Running (Ready = true) STEP: Verifying customized DNS suffix list is configured on pod... 
Jan 11 18:26:29.487: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-8689 PodName:test-dns-nameservers ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 11 18:26:29.487: INFO: >>> kubeConfig: /root/.kube/config I0111 18:26:29.594129 10 log.go:181] (0x8e882a0) (0x8e88380) Create stream I0111 18:26:29.594331 10 log.go:181] (0x8e882a0) (0x8e88380) Stream added, broadcasting: 1 I0111 18:26:29.599324 10 log.go:181] (0x8e882a0) Reply frame received for 1 I0111 18:26:29.599599 10 log.go:181] (0x8e882a0) (0x8f664d0) Create stream I0111 18:26:29.599724 10 log.go:181] (0x8e882a0) (0x8f664d0) Stream added, broadcasting: 3 I0111 18:26:29.601820 10 log.go:181] (0x8e882a0) Reply frame received for 3 I0111 18:26:29.602011 10 log.go:181] (0x8e882a0) (0x76411f0) Create stream I0111 18:26:29.602104 10 log.go:181] (0x8e882a0) (0x76411f0) Stream added, broadcasting: 5 I0111 18:26:29.603679 10 log.go:181] (0x8e882a0) Reply frame received for 5 I0111 18:26:29.706268 10 log.go:181] (0x8e882a0) Data frame received for 3 I0111 18:26:29.706384 10 log.go:181] (0x8f664d0) (3) Data frame handling I0111 18:26:29.706481 10 log.go:181] (0x8f664d0) (3) Data frame sent I0111 18:26:29.707462 10 log.go:181] (0x8e882a0) Data frame received for 5 I0111 18:26:29.707586 10 log.go:181] (0x76411f0) (5) Data frame handling I0111 18:26:29.707693 10 log.go:181] (0x8e882a0) Data frame received for 3 I0111 18:26:29.707788 10 log.go:181] (0x8f664d0) (3) Data frame handling I0111 18:26:29.709758 10 log.go:181] (0x8e882a0) Data frame received for 1 I0111 18:26:29.709839 10 log.go:181] (0x8e88380) (1) Data frame handling I0111 18:26:29.709928 10 log.go:181] (0x8e88380) (1) Data frame sent I0111 18:26:29.710018 10 log.go:181] (0x8e882a0) (0x8e88380) Stream removed, broadcasting: 1 I0111 18:26:29.710144 10 log.go:181] (0x8e882a0) Go away received I0111 18:26:29.710441 10 log.go:181] (0x8e882a0) (0x8e88380) Stream removed, broadcasting: 1 I0111 18:26:29.710546 10 log.go:181] (0x8e882a0) (0x8f664d0) Stream removed, broadcasting: 3 I0111 18:26:29.710643 10 log.go:181] (0x8e882a0) (0x76411f0) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... 
Jan 11 18:26:29.711: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-8689 PodName:test-dns-nameservers ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 11 18:26:29.711: INFO: >>> kubeConfig: /root/.kube/config I0111 18:26:29.816591 10 log.go:181] (0x8323ce0) (0x8323d50) Create stream I0111 18:26:29.816719 10 log.go:181] (0x8323ce0) (0x8323d50) Stream added, broadcasting: 1 I0111 18:26:29.822022 10 log.go:181] (0x8323ce0) Reply frame received for 1 I0111 18:26:29.822324 10 log.go:181] (0x8323ce0) (0x8323f10) Create stream I0111 18:26:29.822468 10 log.go:181] (0x8323ce0) (0x8323f10) Stream added, broadcasting: 3 I0111 18:26:29.824164 10 log.go:181] (0x8323ce0) Reply frame received for 3 I0111 18:26:29.824333 10 log.go:181] (0x8323ce0) (0x8f671f0) Create stream I0111 18:26:29.824427 10 log.go:181] (0x8323ce0) (0x8f671f0) Stream added, broadcasting: 5 I0111 18:26:29.826256 10 log.go:181] (0x8323ce0) Reply frame received for 5 I0111 18:26:29.900802 10 log.go:181] (0x8323ce0) Data frame received for 3 I0111 18:26:29.901071 10 log.go:181] (0x8323f10) (3) Data frame handling I0111 18:26:29.901246 10 log.go:181] (0x8323f10) (3) Data frame sent I0111 18:26:29.901708 10 log.go:181] (0x8323ce0) Data frame received for 5 I0111 18:26:29.901916 10 log.go:181] (0x8f671f0) (5) Data frame handling I0111 18:26:29.902103 10 log.go:181] (0x8323ce0) Data frame received for 3 I0111 18:26:29.902257 10 log.go:181] (0x8323f10) (3) Data frame handling I0111 18:26:29.903715 10 log.go:181] (0x8323ce0) Data frame received for 1 I0111 18:26:29.903888 10 log.go:181] (0x8323d50) (1) Data frame handling I0111 18:26:29.904029 10 log.go:181] (0x8323d50) (1) Data frame sent I0111 18:26:29.904155 10 log.go:181] (0x8323ce0) (0x8323d50) Stream removed, broadcasting: 1 I0111 18:26:29.904314 10 log.go:181] (0x8323ce0) Go away received I0111 18:26:29.904624 10 log.go:181] (0x8323ce0) (0x8323d50) Stream removed, broadcasting: 1 I0111 18:26:29.904803 10 log.go:181] (0x8323ce0) (0x8323f10) Stream removed, broadcasting: 3 I0111 18:26:29.905045 10 log.go:181] (0x8323ce0) (0x8f671f0) Stream removed, broadcasting: 5 Jan 11 18:26:29.905: INFO: Deleting pod test-dns-nameservers... [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 18:26:29.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8689" for this suite. 
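The dns-8689 spec above creates a pod with dnsPolicy "None" and a custom dnsConfig (nameserver 1.1.1.1, search domain resolv.conf.local), then execs agnhost inside the pod to confirm the resolver settings took effect. A sketch of just the pod shape with the client-go API types; the pod name is illustrative:

// dnsconfig.go: a pod whose resolv.conf is fully supplied by dnsConfig because
// dnsPolicy "None" tells the kubelet to ignore cluster DNS entirely.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "test-dns-nameservers-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "agnhost-container",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.21",
				Args:  []string{"pause"},
			}},
			// DNSNone: the kubelet writes only what dnsConfig specifies into resolv.conf.
			DNSPolicy: corev1.DNSNone,
			DNSConfig: &corev1.PodDNSConfig{
				Nameservers: []string{"1.1.1.1"},
				Searches:    []string{"resolv.conf.local"},
			},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
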
•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":309,"completed":283,"skipped":5016,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 18:26:29.955: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should invoke init containers on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating the pod Jan 11 18:26:30.067: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 18:26:37.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-560" for this suite. • [SLOW TEST:7.912 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should invoke init containers on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":309,"completed":284,"skipped":5025,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 18:26:37.870: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating projection with configMap that has name projected-configmap-test-upd-dfe06f63-e661-4b82-8b72-8acd8c703d23 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-dfe06f63-e661-4b82-8b72-8acd8c703d23 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 18:26:44.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "projected-1378" for this suite. • [SLOW TEST:6.292 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":309,"completed":285,"skipped":5079,"failed":0} S ------------------------------ [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 18:26:44.163: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename podtemplate STEP: Waiting for a default service account to be provisioned in namespace [It] should run the lifecycle of PodTemplates [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [AfterEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 18:26:44.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "podtemplate-6514" for this suite. •{"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":309,"completed":286,"skipped":5080,"failed":0} S ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 18:26:44.323: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating secret with name secret-test-map-2a3bcf3a-560e-40cc-8fec-aae8dc7347e2 STEP: Creating a pod to test consume secrets Jan 11 18:26:44.471: INFO: Waiting up to 5m0s for pod "pod-secrets-d246afca-ca10-4d5e-9a4f-4852264beb18" in namespace "secrets-9070" to be "Succeeded or Failed" Jan 11 18:26:44.498: INFO: Pod "pod-secrets-d246afca-ca10-4d5e-9a4f-4852264beb18": Phase="Pending", Reason="", readiness=false. Elapsed: 26.384828ms Jan 11 18:26:46.577: INFO: Pod "pod-secrets-d246afca-ca10-4d5e-9a4f-4852264beb18": Phase="Pending", Reason="", readiness=false. Elapsed: 2.105160749s Jan 11 18:26:48.585: INFO: Pod "pod-secrets-d246afca-ca10-4d5e-9a4f-4852264beb18": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.113836269s STEP: Saw pod success Jan 11 18:26:48.585: INFO: Pod "pod-secrets-d246afca-ca10-4d5e-9a4f-4852264beb18" satisfied condition "Succeeded or Failed" Jan 11 18:26:48.590: INFO: Trying to get logs from node leguer-worker2 pod pod-secrets-d246afca-ca10-4d5e-9a4f-4852264beb18 container secret-volume-test: STEP: delete the pod Jan 11 18:26:48.668: INFO: Waiting for pod pod-secrets-d246afca-ca10-4d5e-9a4f-4852264beb18 to disappear Jan 11 18:26:48.676: INFO: Pod pod-secrets-d246afca-ca10-4d5e-9a4f-4852264beb18 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 18:26:48.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9070" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":309,"completed":287,"skipped":5081,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should run through the lifecycle of Pods and PodStatus [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 18:26:48.691: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:187 [It] should run through the lifecycle of Pods and PodStatus [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating a Pod with a static label STEP: watching for Pod to be ready Jan 11 18:26:48.797: INFO: observed Pod pod-test in namespace pods-9052 in phase Pending conditions [] Jan 11 18:26:48.797: INFO: observed Pod pod-test in namespace pods-9052 in phase Pending conditions [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-11 18:26:48 +0000 UTC }] Jan 11 18:26:48.833: INFO: observed Pod pod-test in namespace pods-9052 in phase Pending conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-11 18:26:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-11 18:26:48 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-11 18:26:48 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-11 18:26:48 +0000 UTC }] STEP: patching the Pod with a new Label and updated data Jan 11 18:26:51.924: INFO: observed event type ADDED STEP: getting the Pod and ensuring that it's patched STEP: getting the PodStatus STEP: replacing the Pod's status Ready condition to False STEP: check the Pod again to ensure its Ready conditions are False STEP: deleting the Pod via a Collection with a LabelSelector STEP: watching for the Pod to be deleted Jan 11 18:26:51.993: INFO: observed event type ADDED Jan 11 18:26:51.994: INFO: observed event type MODIFIED Jan 11 18:26:51.994: INFO: observed event type MODIFIED Jan 11 18:26:51.995: INFO: observed event type MODIFIED Jan 11 18:26:51.995: INFO: observed event type MODIFIED Jan 11 
18:26:51.996: INFO: observed event type MODIFIED Jan 11 18:26:51.996: INFO: observed event type MODIFIED [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 18:26:51.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9052" for this suite. •{"msg":"PASSED [k8s.io] Pods should run through the lifecycle of Pods and PodStatus [Conformance]","total":309,"completed":288,"skipped":5099,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 18:26:52.048: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should serve multiport endpoints from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating service multi-endpoint-test in namespace services-8937 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8937 to expose endpoints map[] Jan 11 18:26:52.558: INFO: successfully validated that service multi-endpoint-test in namespace services-8937 exposes endpoints map[] STEP: Creating pod pod1 in namespace services-8937 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8937 to expose endpoints map[pod1:[100]] Jan 11 18:26:57.978: INFO: successfully validated that service multi-endpoint-test in namespace services-8937 exposes endpoints map[pod1:[100]] STEP: Creating pod pod2 in namespace services-8937 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8937 to expose endpoints map[pod1:[100] pod2:[101]] Jan 11 18:27:01.063: INFO: successfully validated that service multi-endpoint-test in namespace services-8937 exposes endpoints map[pod1:[100] pod2:[101]] STEP: Deleting pod pod1 in namespace services-8937 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8937 to expose endpoints map[pod2:[101]] Jan 11 18:27:01.150: INFO: successfully validated that service multi-endpoint-test in namespace services-8937 exposes endpoints map[pod2:[101]] STEP: Deleting pod pod2 in namespace services-8937 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8937 to expose endpoints map[] Jan 11 18:27:01.453: INFO: successfully validated that service multi-endpoint-test in namespace services-8937 exposes endpoints map[] [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 18:27:01.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8937" for this suite. 
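The services-8937 spec above waits for a multiport Service's endpoints map to track pod1 and pod2 as they are created and deleted. A hedged sketch of a two-port Service of that general shape; the port numbers, port names, and selector here are illustrative and not the test's exact values:

// multiport.go: a Service exposing two named ports, each targeting a different
// containerPort, so each backend pod can sit behind its own service port.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "multi-endpoint-test"},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"app": "multi-endpoint-test"},
			Ports: []corev1.ServicePort{
				// Named ports let the Endpoints controller report each backend
				// under the port it actually serves.
				{Name: "portname1", Port: 80, TargetPort: intstr.FromInt(100)},
				{Name: "portname2", Port: 81, TargetPort: intstr.FromInt(101)},
			},
		},
	}
	out, _ := json.MarshalIndent(svc, "", "  ")
	fmt.Println(string(out))
}
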
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 • [SLOW TEST:9.647 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":309,"completed":289,"skipped":5121,"failed":0} SSS ------------------------------ [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 18:27:01.696: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should have session affinity work for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating service in namespace services-8096 STEP: creating service affinity-nodeport in namespace services-8096 STEP: creating replication controller affinity-nodeport in namespace services-8096 I0111 18:27:02.050636 10 runners.go:190] Created replication controller with name: affinity-nodeport, namespace: services-8096, replica count: 3 I0111 18:27:05.102213 10 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0111 18:27:08.103121 10 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0111 18:27:11.104094 10 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 11 18:27:11.126: INFO: Creating new exec pod Jan 11 18:27:16.180: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-8096 exec execpod-affinitydwtwg -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport 80' Jan 11 18:27:20.801: INFO: stderr: "I0111 18:27:20.685266 4903 log.go:181] (0x28d0460) (0x28d04d0) Create stream\nI0111 18:27:20.687241 4903 log.go:181] (0x28d0460) (0x28d04d0) Stream added, broadcasting: 1\nI0111 18:27:20.700947 4903 log.go:181] (0x28d0460) Reply frame received for 1\nI0111 18:27:20.701538 4903 log.go:181] (0x28d0460) (0x28d0770) Create stream\nI0111 18:27:20.701621 4903 log.go:181] (0x28d0460) (0x28d0770) Stream added, broadcasting: 3\nI0111 18:27:20.703521 4903 log.go:181] (0x28d0460) Reply frame received for 3\nI0111 18:27:20.704072 4903 log.go:181] (0x28d0460) (0x2592a10) Create stream\nI0111 18:27:20.704171 4903 log.go:181] (0x28d0460) (0x2592a10) Stream added, broadcasting: 5\nI0111 18:27:20.705913 4903 log.go:181] (0x28d0460) 
Reply frame received for 5\nI0111 18:27:20.785315 4903 log.go:181] (0x28d0460) Data frame received for 5\nI0111 18:27:20.785602 4903 log.go:181] (0x2592a10) (5) Data frame handling\nI0111 18:27:20.785816 4903 log.go:181] (0x28d0460) Data frame received for 3\nI0111 18:27:20.786009 4903 log.go:181] (0x28d0770) (3) Data frame handling\nI0111 18:27:20.787938 4903 log.go:181] (0x2592a10) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-nodeport 80\nI0111 18:27:20.788324 4903 log.go:181] (0x28d0460) Data frame received for 1\nI0111 18:27:20.788477 4903 log.go:181] (0x28d04d0) (1) Data frame handling\nI0111 18:27:20.788729 4903 log.go:181] (0x28d0460) Data frame received for 5\nI0111 18:27:20.788931 4903 log.go:181] (0x2592a10) (5) Data frame handling\nConnection to affinity-nodeport 80 port [tcp/http] succeeded!\nI0111 18:27:20.789061 4903 log.go:181] (0x28d04d0) (1) Data frame sent\nI0111 18:27:20.789357 4903 log.go:181] (0x2592a10) (5) Data frame sent\nI0111 18:27:20.789504 4903 log.go:181] (0x28d0460) Data frame received for 5\nI0111 18:27:20.789575 4903 log.go:181] (0x2592a10) (5) Data frame handling\nI0111 18:27:20.790324 4903 log.go:181] (0x28d0460) (0x28d04d0) Stream removed, broadcasting: 1\nI0111 18:27:20.791295 4903 log.go:181] (0x28d0460) Go away received\nI0111 18:27:20.793203 4903 log.go:181] (0x28d0460) (0x28d04d0) Stream removed, broadcasting: 1\nI0111 18:27:20.793676 4903 log.go:181] (0x28d0460) (0x28d0770) Stream removed, broadcasting: 3\nI0111 18:27:20.793839 4903 log.go:181] (0x28d0460) (0x2592a10) Stream removed, broadcasting: 5\n" Jan 11 18:27:20.802: INFO: stdout: "" Jan 11 18:27:20.809: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-8096 exec execpod-affinitydwtwg -- /bin/sh -x -c nc -zv -t -w 2 10.96.161.137 80' Jan 11 18:27:22.212: INFO: stderr: "I0111 18:27:22.119060 4923 log.go:181] (0x269e150) (0x269e1c0) Create stream\nI0111 18:27:22.121723 4923 log.go:181] (0x269e150) (0x269e1c0) Stream added, broadcasting: 1\nI0111 18:27:22.138856 4923 log.go:181] (0x269e150) Reply frame received for 1\nI0111 18:27:22.139368 4923 log.go:181] (0x269e150) (0x269e2a0) Create stream\nI0111 18:27:22.139431 4923 log.go:181] (0x269e150) (0x269e2a0) Stream added, broadcasting: 3\nI0111 18:27:22.140683 4923 log.go:181] (0x269e150) Reply frame received for 3\nI0111 18:27:22.141071 4923 log.go:181] (0x269e150) (0x295c0e0) Create stream\nI0111 18:27:22.141147 4923 log.go:181] (0x269e150) (0x295c0e0) Stream added, broadcasting: 5\nI0111 18:27:22.142269 4923 log.go:181] (0x269e150) Reply frame received for 5\nI0111 18:27:22.194074 4923 log.go:181] (0x269e150) Data frame received for 3\nI0111 18:27:22.194274 4923 log.go:181] (0x269e150) Data frame received for 5\nI0111 18:27:22.194681 4923 log.go:181] (0x295c0e0) (5) Data frame handling\nI0111 18:27:22.194996 4923 log.go:181] (0x269e2a0) (3) Data frame handling\nI0111 18:27:22.195621 4923 log.go:181] (0x269e150) Data frame received for 1\nI0111 18:27:22.195749 4923 log.go:181] (0x269e1c0) (1) Data frame handling\nI0111 18:27:22.196614 4923 log.go:181] (0x295c0e0) (5) Data frame sent\nI0111 18:27:22.196975 4923 log.go:181] (0x269e1c0) (1) Data frame sent\nI0111 18:27:22.197114 4923 log.go:181] (0x269e150) Data frame received for 5\nI0111 18:27:22.197227 4923 log.go:181] (0x295c0e0) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.161.137 80\nConnection to 10.96.161.137 80 port [tcp/http] succeeded!\nI0111 18:27:22.198608 4923 log.go:181] (0x269e150) (0x269e1c0) Stream 
removed, broadcasting: 1\nI0111 18:27:22.201517 4923 log.go:181] (0x269e150) Go away received\nI0111 18:27:22.203678 4923 log.go:181] (0x269e150) (0x269e1c0) Stream removed, broadcasting: 1\nI0111 18:27:22.203957 4923 log.go:181] (0x269e150) (0x269e2a0) Stream removed, broadcasting: 3\nI0111 18:27:22.204158 4923 log.go:181] (0x269e150) (0x295c0e0) Stream removed, broadcasting: 5\n" Jan 11 18:27:22.213: INFO: stdout: "" Jan 11 18:27:22.213: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-8096 exec execpod-affinitydwtwg -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.13 30909' Jan 11 18:27:23.699: INFO: stderr: "I0111 18:27:23.581189 4943 log.go:181] (0x2f0c150) (0x2f0c1c0) Create stream\nI0111 18:27:23.582865 4943 log.go:181] (0x2f0c150) (0x2f0c1c0) Stream added, broadcasting: 1\nI0111 18:27:23.598843 4943 log.go:181] (0x2f0c150) Reply frame received for 1\nI0111 18:27:23.599453 4943 log.go:181] (0x2f0c150) (0x3008150) Create stream\nI0111 18:27:23.599541 4943 log.go:181] (0x2f0c150) (0x3008150) Stream added, broadcasting: 3\nI0111 18:27:23.601037 4943 log.go:181] (0x2f0c150) Reply frame received for 3\nI0111 18:27:23.601290 4943 log.go:181] (0x2f0c150) (0x3008310) Create stream\nI0111 18:27:23.601355 4943 log.go:181] (0x2f0c150) (0x3008310) Stream added, broadcasting: 5\nI0111 18:27:23.602323 4943 log.go:181] (0x2f0c150) Reply frame received for 5\nI0111 18:27:23.683302 4943 log.go:181] (0x2f0c150) Data frame received for 3\nI0111 18:27:23.683561 4943 log.go:181] (0x2f0c150) Data frame received for 5\nI0111 18:27:23.683809 4943 log.go:181] (0x3008310) (5) Data frame handling\nI0111 18:27:23.684254 4943 log.go:181] (0x2f0c150) Data frame received for 1\nI0111 18:27:23.684424 4943 log.go:181] (0x2f0c1c0) (1) Data frame handling\nI0111 18:27:23.684592 4943 log.go:181] (0x2f0c1c0) (1) Data frame sent\n+ nc -zv -t -w 2 172.18.0.13 30909\nConnection to 172.18.0.13 30909 port [tcp/30909] succeeded!\nI0111 18:27:23.686329 4943 log.go:181] (0x3008150) (3) Data frame handling\nI0111 18:27:23.686628 4943 log.go:181] (0x3008310) (5) Data frame sent\nI0111 18:27:23.686885 4943 log.go:181] (0x2f0c150) Data frame received for 5\nI0111 18:27:23.686983 4943 log.go:181] (0x3008310) (5) Data frame handling\nI0111 18:27:23.688149 4943 log.go:181] (0x2f0c150) (0x2f0c1c0) Stream removed, broadcasting: 1\nI0111 18:27:23.688747 4943 log.go:181] (0x2f0c150) Go away received\nI0111 18:27:23.691777 4943 log.go:181] (0x2f0c150) (0x2f0c1c0) Stream removed, broadcasting: 1\nI0111 18:27:23.692174 4943 log.go:181] (0x2f0c150) (0x3008150) Stream removed, broadcasting: 3\nI0111 18:27:23.692393 4943 log.go:181] (0x2f0c150) (0x3008310) Stream removed, broadcasting: 5\n" Jan 11 18:27:23.700: INFO: stdout: "" Jan 11 18:27:23.701: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-8096 exec execpod-affinitydwtwg -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.12 30909' Jan 11 18:27:25.184: INFO: stderr: "I0111 18:27:25.038831 4963 log.go:181] (0x2fa3880) (0x2fa38f0) Create stream\nI0111 18:27:25.044415 4963 log.go:181] (0x2fa3880) (0x2fa38f0) Stream added, broadcasting: 1\nI0111 18:27:25.063063 4963 log.go:181] (0x2fa3880) Reply frame received for 1\nI0111 18:27:25.063551 4963 log.go:181] (0x2fa3880) (0x2d9a150) Create stream\nI0111 18:27:25.063626 4963 log.go:181] (0x2fa3880) (0x2d9a150) Stream added, broadcasting: 3\nI0111 18:27:25.065378 4963 log.go:181] (0x2fa3880) Reply frame received 
for 3\nI0111 18:27:25.065825 4963 log.go:181] (0x2fa3880) (0x2d9a310) Create stream\nI0111 18:27:25.065947 4963 log.go:181] (0x2fa3880) (0x2d9a310) Stream added, broadcasting: 5\nI0111 18:27:25.067900 4963 log.go:181] (0x2fa3880) Reply frame received for 5\nI0111 18:27:25.166434 4963 log.go:181] (0x2fa3880) Data frame received for 3\nI0111 18:27:25.166700 4963 log.go:181] (0x2fa3880) Data frame received for 1\nI0111 18:27:25.167110 4963 log.go:181] (0x2fa3880) Data frame received for 5\nI0111 18:27:25.167361 4963 log.go:181] (0x2fa38f0) (1) Data frame handling\nI0111 18:27:25.167638 4963 log.go:181] (0x2d9a310) (5) Data frame handling\nI0111 18:27:25.168077 4963 log.go:181] (0x2fa38f0) (1) Data frame sent\nI0111 18:27:25.168375 4963 log.go:181] (0x2d9a150) (3) Data frame handling\nI0111 18:27:25.168647 4963 log.go:181] (0x2d9a310) (5) Data frame sent\nI0111 18:27:25.168765 4963 log.go:181] (0x2fa3880) Data frame received for 5\nI0111 18:27:25.168962 4963 log.go:181] (0x2d9a310) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.12 30909\nConnection to 172.18.0.12 30909 port [tcp/30909] succeeded!\nI0111 18:27:25.170916 4963 log.go:181] (0x2fa3880) (0x2fa38f0) Stream removed, broadcasting: 1\nI0111 18:27:25.173332 4963 log.go:181] (0x2fa3880) Go away received\nI0111 18:27:25.175413 4963 log.go:181] (0x2fa3880) (0x2fa38f0) Stream removed, broadcasting: 1\nI0111 18:27:25.175775 4963 log.go:181] (0x2fa3880) (0x2d9a150) Stream removed, broadcasting: 3\nI0111 18:27:25.175946 4963 log.go:181] (0x2fa3880) (0x2d9a310) Stream removed, broadcasting: 5\n" Jan 11 18:27:25.185: INFO: stdout: "" Jan 11 18:27:25.186: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-8096 exec execpod-affinitydwtwg -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.13:30909/ ; done' Jan 11 18:27:26.805: INFO: stderr: "I0111 18:27:26.549643 4983 log.go:181] (0x28aa930) (0x28aa9a0) Create stream\nI0111 18:27:26.552704 4983 log.go:181] (0x28aa930) (0x28aa9a0) Stream added, broadcasting: 1\nI0111 18:27:26.570150 4983 log.go:181] (0x28aa930) Reply frame received for 1\nI0111 18:27:26.570589 4983 log.go:181] (0x28aa930) (0x28aa070) Create stream\nI0111 18:27:26.570648 4983 log.go:181] (0x28aa930) (0x28aa070) Stream added, broadcasting: 3\nI0111 18:27:26.571814 4983 log.go:181] (0x28aa930) Reply frame received for 3\nI0111 18:27:26.572032 4983 log.go:181] (0x28aa930) (0x28aa460) Create stream\nI0111 18:27:26.572086 4983 log.go:181] (0x28aa930) (0x28aa460) Stream added, broadcasting: 5\nI0111 18:27:26.573001 4983 log.go:181] (0x28aa930) Reply frame received for 5\nI0111 18:27:26.670378 4983 log.go:181] (0x28aa930) Data frame received for 3\nI0111 18:27:26.670597 4983 log.go:181] (0x28aa070) (3) Data frame handling\nI0111 18:27:26.670775 4983 log.go:181] (0x28aa930) Data frame received for 5\nI0111 18:27:26.670882 4983 log.go:181] (0x28aa070) (3) Data frame sent\nI0111 18:27:26.671125 4983 log.go:181] (0x28aa460) (5) Data frame handling\nI0111 18:27:26.671412 4983 log.go:181] (0x28aa460) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:30909/\nI0111 18:27:26.696392 4983 log.go:181] (0x28aa930) Data frame received for 3\nI0111 18:27:26.696581 4983 log.go:181] (0x28aa070) (3) Data frame handling\nI0111 18:27:26.696739 4983 log.go:181] (0x28aa070) (3) Data frame sent\nI0111 18:27:26.697413 4983 log.go:181] (0x28aa930) Data frame received for 5\nI0111 18:27:26.697485 4983 
log.go:181] (0x28aa460) (5) Data frame handling\n+ echo\n+ curl -q -sI0111 18:27:26.697583 4983 log.go:181] (0x28aa930) Data frame received for 3\nI0111 18:27:26.697789 4983 log.go:181] (0x28aa070) (3) Data frame handling\nI0111 18:27:26.697957 4983 log.go:181] (0x28aa460) (5) Data frame sent\nI0111 18:27:26.698219 4983 log.go:181] (0x28aa930) Data frame received for 5\nI0111 18:27:26.698385 4983 log.go:181] (0x28aa460) (5) Data frame handling\n --connect-timeout 2 http://172.18.0.13:30909/\nI0111 18:27:26.698540 4983 log.go:181] (0x28aa070) (3) Data frame sent\nI0111 18:27:26.698666 4983 log.go:181] (0x28aa460) (5) Data frame sent\nI0111 18:27:26.705478 4983 log.go:181] (0x28aa930) Data frame received for 3\nI0111 18:27:26.705659 4983 log.go:181] (0x28aa070) (3) Data frame handling\nI0111 18:27:26.705860 4983 log.go:181] (0x28aa070) (3) Data frame sent\nI0111 18:27:26.706790 4983 log.go:181] (0x28aa930) Data frame received for 3\nI0111 18:27:26.706965 4983 log.go:181] (0x28aa070) (3) Data frame handling\nI0111 18:27:26.707076 4983 log.go:181] (0x28aa930) Data frame received for 5\nI0111 18:27:26.707171 4983 log.go:181] (0x28aa460) (5) Data frame handling\nI0111 18:27:26.707268 4983 log.go:181] (0x28aa460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:30909/\nI0111 18:27:26.707349 4983 log.go:181] (0x28aa070) (3) Data frame sent\nI0111 18:27:26.714040 4983 log.go:181] (0x28aa930) Data frame received for 3\nI0111 18:27:26.714176 4983 log.go:181] (0x28aa070) (3) Data frame handling\nI0111 18:27:26.714320 4983 log.go:181] (0x28aa070) (3) Data frame sent\nI0111 18:27:26.715191 4983 log.go:181] (0x28aa930) Data frame received for 5\nI0111 18:27:26.715309 4983 log.go:181] (0x28aa460) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:30909/\nI0111 18:27:26.715449 4983 log.go:181] (0x28aa930) Data frame received for 3\nI0111 18:27:26.715649 4983 log.go:181] (0x28aa070) (3) Data frame handling\nI0111 18:27:26.715801 4983 log.go:181] (0x28aa460) (5) Data frame sent\nI0111 18:27:26.715901 4983 log.go:181] (0x28aa070) (3) Data frame sent\nI0111 18:27:26.722361 4983 log.go:181] (0x28aa930) Data frame received for 3\nI0111 18:27:26.722490 4983 log.go:181] (0x28aa070) (3) Data frame handling\nI0111 18:27:26.722659 4983 log.go:181] (0x28aa070) (3) Data frame sent\nI0111 18:27:26.722855 4983 log.go:181] (0x28aa930) Data frame received for 5\nI0111 18:27:26.722955 4983 log.go:181] (0x28aa460) (5) Data frame handling\nI0111 18:27:26.723054 4983 log.go:181] (0x28aa460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:30909/\nI0111 18:27:26.723172 4983 log.go:181] (0x28aa930) Data frame received for 3\nI0111 18:27:26.723287 4983 log.go:181] (0x28aa070) (3) Data frame handling\nI0111 18:27:26.723396 4983 log.go:181] (0x28aa070) (3) Data frame sent\nI0111 18:27:26.728960 4983 log.go:181] (0x28aa930) Data frame received for 3\nI0111 18:27:26.729051 4983 log.go:181] (0x28aa070) (3) Data frame handling\nI0111 18:27:26.729139 4983 log.go:181] (0x28aa070) (3) Data frame sent\nI0111 18:27:26.729575 4983 log.go:181] (0x28aa930) Data frame received for 3\nI0111 18:27:26.729714 4983 log.go:181] (0x28aa070) (3) Data frame handling\nI0111 18:27:26.729815 4983 log.go:181] (0x28aa070) (3) Data frame sent\nI0111 18:27:26.729907 4983 log.go:181] (0x28aa930) Data frame received for 5\nI0111 18:27:26.729989 4983 log.go:181] (0x28aa460) (5) Data frame handling\nI0111 18:27:26.730139 4983 log.go:181] (0x28aa460) (5) Data frame 
sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:30909/\nI0111 18:27:26.734771 4983 log.go:181] (0x28aa930) Data frame received for 3\nI0111 18:27:26.734866 4983 log.go:181] (0x28aa070) (3) Data frame handling\nI0111 18:27:26.734964 4983 log.go:181] (0x28aa070) (3) Data frame sent\nI0111 18:27:26.735870 4983 log.go:181] (0x28aa930) Data frame received for 5\nI0111 18:27:26.736030 4983 log.go:181] (0x28aa460) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:30909/\nI0111 18:27:26.736202 4983 log.go:181] (0x28aa930) Data frame received for 3\nI0111 18:27:26.736315 4983 log.go:181] (0x28aa070) (3) Data frame handling\nI0111 18:27:26.736402 4983 log.go:181] (0x28aa070) (3) Data frame sent\nI0111 18:27:26.736519 4983 log.go:181] (0x28aa460) (5) Data frame sent\nI0111 18:27:26.740184 4983 log.go:181] (0x28aa930) Data frame received for 3\nI0111 18:27:26.740342 4983 log.go:181] (0x28aa070) (3) Data frame handling\nI0111 18:27:26.740491 4983 log.go:181] (0x28aa070) (3) Data frame sent\nI0111 18:27:26.740660 4983 log.go:181] (0x28aa930) Data frame received for 3\nI0111 18:27:26.740811 4983 log.go:181] (0x28aa070) (3) Data frame handling\nI0111 18:27:26.741075 4983 log.go:181] (0x28aa930) Data frame received for 5\nI0111 18:27:26.741252 4983 log.go:181] (0x28aa460) (5) Data frame handling\n+ echo\n+ curl -q -sI0111 18:27:26.741351 4983 log.go:181] (0x28aa070) (3) Data frame sent\nI0111 18:27:26.741477 4983 log.go:181] (0x28aa460) (5) Data frame sent\nI0111 18:27:26.741565 4983 log.go:181] (0x28aa930) Data frame received for 5\nI0111 18:27:26.741679 4983 log.go:181] (0x28aa460) (5) Data frame handling\nI0111 18:27:26.741805 4983 log.go:181] (0x28aa460) (5) Data frame sent\n --connect-timeout 2 http://172.18.0.13:30909/\nI0111 18:27:26.745844 4983 log.go:181] (0x28aa930) Data frame received for 3\nI0111 18:27:26.745972 4983 log.go:181] (0x28aa070) (3) Data frame handling\nI0111 18:27:26.746124 4983 log.go:181] (0x28aa070) (3) Data frame sent\nI0111 18:27:26.746775 4983 log.go:181] (0x28aa930) Data frame received for 5\nI0111 18:27:26.746874 4983 log.go:181] (0x28aa460) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeoutI0111 18:27:26.747055 4983 log.go:181] (0x28aa930) Data frame received for 3\nI0111 18:27:26.747303 4983 log.go:181] (0x28aa070) (3) Data frame handling\nI0111 18:27:26.747470 4983 log.go:181] (0x28aa070) (3) Data frame sent\nI0111 18:27:26.747616 4983 log.go:181] (0x28aa460) (5) Data frame sent\nI0111 18:27:26.747746 4983 log.go:181] (0x28aa930) Data frame received for 5\nI0111 18:27:26.747864 4983 log.go:181] (0x28aa460) (5) Data frame handling\nI0111 18:27:26.748005 4983 log.go:181] (0x28aa460) (5) Data frame sent\n 2 http://172.18.0.13:30909/\nI0111 18:27:26.750118 4983 log.go:181] (0x28aa930) Data frame received for 3\nI0111 18:27:26.750197 4983 log.go:181] (0x28aa070) (3) Data frame handling\nI0111 18:27:26.750277 4983 log.go:181] (0x28aa070) (3) Data frame sent\nI0111 18:27:26.751204 4983 log.go:181] (0x28aa930) Data frame received for 5\nI0111 18:27:26.751311 4983 log.go:181] (0x28aa460) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:30909/\nI0111 18:27:26.751448 4983 log.go:181] (0x28aa930) Data frame received for 3\nI0111 18:27:26.751633 4983 log.go:181] (0x28aa070) (3) Data frame handling\nI0111 18:27:26.751795 4983 log.go:181] (0x28aa460) (5) Data frame sent\nI0111 18:27:26.751950 4983 log.go:181] (0x28aa070) (3) Data frame sent\nI0111 18:27:26.756234 4983 log.go:181] 
(0x28aa930) Data frame received for 3\nI0111 18:27:26.756312 4983 log.go:181] (0x28aa070) (3) Data frame handling\nI0111 18:27:26.756413 4983 log.go:181] (0x28aa070) (3) Data frame sent\nI0111 18:27:26.757074 4983 log.go:181] (0x28aa930) Data frame received for 3\nI0111 18:27:26.757189 4983 log.go:181] (0x28aa070) (3) Data frame handling\nI0111 18:27:26.757286 4983 log.go:181] (0x28aa930) Data frame received for 5\nI0111 18:27:26.757416 4983 log.go:181] (0x28aa460) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:30909/\nI0111 18:27:26.757500 4983 log.go:181] (0x28aa070) (3) Data frame sent\nI0111 18:27:26.757627 4983 log.go:181] (0x28aa460) (5) Data frame sent\nI0111 18:27:26.760998 4983 log.go:181] (0x28aa930) Data frame received for 3\nI0111 18:27:26.761097 4983 log.go:181] (0x28aa070) (3) Data frame handling\nI0111 18:27:26.761199 4983 log.go:181] (0x28aa070) (3) Data frame sent\nI0111 18:27:26.761992 4983 log.go:181] (0x28aa930) Data frame received for 3\nI0111 18:27:26.762080 4983 log.go:181] (0x28aa930) Data frame received for 5\nI0111 18:27:26.762232 4983 log.go:181] (0x28aa460) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:30909/\nI0111 18:27:26.762342 4983 log.go:181] (0x28aa070) (3) Data frame handling\nI0111 18:27:26.762483 4983 log.go:181] (0x28aa070) (3) Data frame sent\nI0111 18:27:26.762596 4983 log.go:181] (0x28aa460) (5) Data frame sent\nI0111 18:27:26.769896 4983 log.go:181] (0x28aa930) Data frame received for 3\nI0111 18:27:26.770032 4983 log.go:181] (0x28aa070) (3) Data frame handling\nI0111 18:27:26.770153 4983 log.go:181] (0x28aa930) Data frame received for 5\nI0111 18:27:26.770251 4983 log.go:181] (0x28aa460) (5) Data frame handling\nI0111 18:27:26.770346 4983 log.go:181] (0x28aa460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:30909/\nI0111 18:27:26.770414 4983 log.go:181] (0x28aa070) (3) Data frame sent\nI0111 18:27:26.770475 4983 log.go:181] (0x28aa930) Data frame received for 3\nI0111 18:27:26.770528 4983 log.go:181] (0x28aa070) (3) Data frame handling\nI0111 18:27:26.770601 4983 log.go:181] (0x28aa070) (3) Data frame sent\nI0111 18:27:26.774616 4983 log.go:181] (0x28aa930) Data frame received for 3\nI0111 18:27:26.774694 4983 log.go:181] (0x28aa070) (3) Data frame handling\nI0111 18:27:26.774762 4983 log.go:181] (0x28aa070) (3) Data frame sent\nI0111 18:27:26.775311 4983 log.go:181] (0x28aa930) Data frame received for 5\nI0111 18:27:26.775392 4983 log.go:181] (0x28aa460) (5) Data frame handling\nI0111 18:27:26.775459 4983 log.go:181] (0x28aa460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:30909/\nI0111 18:27:26.775545 4983 log.go:181] (0x28aa930) Data frame received for 3\nI0111 18:27:26.775647 4983 log.go:181] (0x28aa070) (3) Data frame handling\nI0111 18:27:26.775739 4983 log.go:181] (0x28aa070) (3) Data frame sent\nI0111 18:27:26.778963 4983 log.go:181] (0x28aa930) Data frame received for 3\nI0111 18:27:26.779057 4983 log.go:181] (0x28aa070) (3) Data frame handling\nI0111 18:27:26.779182 4983 log.go:181] (0x28aa070) (3) Data frame sent\nI0111 18:27:26.779631 4983 log.go:181] (0x28aa930) Data frame received for 3\nI0111 18:27:26.779718 4983 log.go:181] (0x28aa070) (3) Data frame handling\nI0111 18:27:26.779801 4983 log.go:181] (0x28aa070) (3) Data frame sent\nI0111 18:27:26.779869 4983 log.go:181] (0x28aa930) Data frame received for 5\nI0111 18:27:26.779946 4983 log.go:181] (0x28aa460) (5) Data frame 
handling\nI0111 18:27:26.780054 4983 log.go:181] (0x28aa460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:30909/\nI0111 18:27:26.783502 4983 log.go:181] (0x28aa930) Data frame received for 3\nI0111 18:27:26.783685 4983 log.go:181] (0x28aa070) (3) Data frame handling\nI0111 18:27:26.783870 4983 log.go:181] (0x28aa070) (3) Data frame sent\nI0111 18:27:26.784060 4983 log.go:181] (0x28aa930) Data frame received for 3\nI0111 18:27:26.784229 4983 log.go:181] (0x28aa070) (3) Data frame handling\nI0111 18:27:26.784352 4983 log.go:181] (0x28aa930) Data frame received for 5\nI0111 18:27:26.784566 4983 log.go:181] (0x28aa460) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeoutI0111 18:27:26.784748 4983 log.go:181] (0x28aa070) (3) Data frame sent\nI0111 18:27:26.785023 4983 log.go:181] (0x28aa460) (5) Data frame sent\nI0111 18:27:26.785159 4983 log.go:181] (0x28aa930) Data frame received for 5\nI0111 18:27:26.785254 4983 log.go:181] (0x28aa460) (5) Data frame handling\nI0111 18:27:26.785356 4983 log.go:181] (0x28aa460) (5) Data frame sent\n 2 http://172.18.0.13:30909/\nI0111 18:27:26.788610 4983 log.go:181] (0x28aa930) Data frame received for 3\nI0111 18:27:26.788754 4983 log.go:181] (0x28aa070) (3) Data frame handling\nI0111 18:27:26.789036 4983 log.go:181] (0x28aa070) (3) Data frame sent\nI0111 18:27:26.789293 4983 log.go:181] (0x28aa930) Data frame received for 3\nI0111 18:27:26.789391 4983 log.go:181] (0x28aa070) (3) Data frame handling\nI0111 18:27:26.789675 4983 log.go:181] (0x28aa930) Data frame received for 5\nI0111 18:27:26.789825 4983 log.go:181] (0x28aa460) (5) Data frame handling\nI0111 18:27:26.791118 4983 log.go:181] (0x28aa930) Data frame received for 1\nI0111 18:27:26.791187 4983 log.go:181] (0x28aa9a0) (1) Data frame handling\nI0111 18:27:26.791259 4983 log.go:181] (0x28aa9a0) (1) Data frame sent\nI0111 18:27:26.792453 4983 log.go:181] (0x28aa930) (0x28aa9a0) Stream removed, broadcasting: 1\nI0111 18:27:26.794184 4983 log.go:181] (0x28aa930) Go away received\nI0111 18:27:26.797403 4983 log.go:181] (0x28aa930) (0x28aa9a0) Stream removed, broadcasting: 1\nI0111 18:27:26.797627 4983 log.go:181] (0x28aa930) (0x28aa070) Stream removed, broadcasting: 3\nI0111 18:27:26.797783 4983 log.go:181] (0x28aa930) (0x28aa460) Stream removed, broadcasting: 5\n" Jan 11 18:27:26.808: INFO: stdout: "\naffinity-nodeport-ksl4x\naffinity-nodeport-ksl4x\naffinity-nodeport-ksl4x\naffinity-nodeport-ksl4x\naffinity-nodeport-ksl4x\naffinity-nodeport-ksl4x\naffinity-nodeport-ksl4x\naffinity-nodeport-ksl4x\naffinity-nodeport-ksl4x\naffinity-nodeport-ksl4x\naffinity-nodeport-ksl4x\naffinity-nodeport-ksl4x\naffinity-nodeport-ksl4x\naffinity-nodeport-ksl4x\naffinity-nodeport-ksl4x\naffinity-nodeport-ksl4x" Jan 11 18:27:26.809: INFO: Received response from host: affinity-nodeport-ksl4x Jan 11 18:27:26.809: INFO: Received response from host: affinity-nodeport-ksl4x Jan 11 18:27:26.809: INFO: Received response from host: affinity-nodeport-ksl4x Jan 11 18:27:26.809: INFO: Received response from host: affinity-nodeport-ksl4x Jan 11 18:27:26.809: INFO: Received response from host: affinity-nodeport-ksl4x Jan 11 18:27:26.809: INFO: Received response from host: affinity-nodeport-ksl4x Jan 11 18:27:26.809: INFO: Received response from host: affinity-nodeport-ksl4x Jan 11 18:27:26.809: INFO: Received response from host: affinity-nodeport-ksl4x Jan 11 18:27:26.809: INFO: Received response from host: affinity-nodeport-ksl4x Jan 11 18:27:26.809: INFO: Received response from host: 
affinity-nodeport-ksl4x Jan 11 18:27:26.809: INFO: Received response from host: affinity-nodeport-ksl4x Jan 11 18:27:26.809: INFO: Received response from host: affinity-nodeport-ksl4x Jan 11 18:27:26.809: INFO: Received response from host: affinity-nodeport-ksl4x Jan 11 18:27:26.809: INFO: Received response from host: affinity-nodeport-ksl4x Jan 11 18:27:26.809: INFO: Received response from host: affinity-nodeport-ksl4x Jan 11 18:27:26.809: INFO: Received response from host: affinity-nodeport-ksl4x Jan 11 18:27:26.809: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport in namespace services-8096, will wait for the garbage collector to delete the pods Jan 11 18:27:26.926: INFO: Deleting ReplicationController affinity-nodeport took: 23.653871ms Jan 11 18:27:27.528: INFO: Terminating ReplicationController affinity-nodeport pods took: 601.216571ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 18:28:30.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8096" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 • [SLOW TEST:89.034 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity work for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":309,"completed":290,"skipped":5124,"failed":0} [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 18:28:30.731: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating Pod STEP: Reading file content from the nginx-container Jan 11 18:28:34.891: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-4236 PodName:pod-sharedvolume-2977440e-d68b-4331-89b4-3aa4ccba859e ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 11 18:28:34.891: INFO: >>> kubeConfig: /root/.kube/config I0111 18:28:34.998418 10 log.go:181] (0x8e88620) (0x8e88700) Create stream I0111 18:28:34.998577 10 log.go:181] (0x8e88620) (0x8e88700) Stream added, broadcasting: 1 I0111 18:28:35.004104 10 log.go:181] (0x8e88620) Reply frame received for 1 I0111 18:28:35.004362 10 log.go:181] (0x8e88620) (0xa278150) Create stream I0111 18:28:35.004497 10 log.go:181] (0x8e88620) (0xa278150) Stream added, broadcasting: 3 I0111 18:28:35.006593 10 
log.go:181] (0x8e88620) Reply frame received for 3 I0111 18:28:35.006815 10 log.go:181] (0x8e88620) (0x8f5c380) Create stream I0111 18:28:35.006877 10 log.go:181] (0x8e88620) (0x8f5c380) Stream added, broadcasting: 5 I0111 18:28:35.008196 10 log.go:181] (0x8e88620) Reply frame received for 5 I0111 18:28:35.059704 10 log.go:181] (0x8e88620) Data frame received for 5 I0111 18:28:35.059848 10 log.go:181] (0x8f5c380) (5) Data frame handling I0111 18:28:35.060060 10 log.go:181] (0x8e88620) Data frame received for 3 I0111 18:28:35.060289 10 log.go:181] (0xa278150) (3) Data frame handling I0111 18:28:35.060443 10 log.go:181] (0xa278150) (3) Data frame sent I0111 18:28:35.060594 10 log.go:181] (0x8e88620) Data frame received for 3 I0111 18:28:35.060772 10 log.go:181] (0xa278150) (3) Data frame handling I0111 18:28:35.061444 10 log.go:181] (0x8e88620) Data frame received for 1 I0111 18:28:35.061647 10 log.go:181] (0x8e88700) (1) Data frame handling I0111 18:28:35.061845 10 log.go:181] (0x8e88700) (1) Data frame sent I0111 18:28:35.062032 10 log.go:181] (0x8e88620) (0x8e88700) Stream removed, broadcasting: 1 I0111 18:28:35.062277 10 log.go:181] (0x8e88620) Go away received I0111 18:28:35.062828 10 log.go:181] (0x8e88620) (0x8e88700) Stream removed, broadcasting: 1 I0111 18:28:35.063032 10 log.go:181] (0x8e88620) (0xa278150) Stream removed, broadcasting: 3 I0111 18:28:35.063229 10 log.go:181] (0x8e88620) (0x8f5c380) Stream removed, broadcasting: 5 Jan 11 18:28:35.063: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 18:28:35.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4236" for this suite. 
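The EmptyDir test above creates a single pod in which two containers mount the same emptyDir volume and then execs into one of them to read a file written by the other. As a hypothetical sketch (not the e2e framework's own code), a pod of that shape can be built and created with client-go roughly as follows; the pod name, namespace, commands, and mount paths are illustrative, not taken from the test:

```go
// Minimal sketch: one emptyDir volume shared by two containers, so data
// written by the "writer" is visible to the "reader".
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "shared-volume-demo"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name:         "shared-data",
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
			Containers: []corev1.Container{
				{
					Name:         "writer",
					Image:        "docker.io/library/busybox:1.29",
					Command:      []string{"sh", "-c", "echo hello > /data/shareddata.txt && sleep 3600"},
					VolumeMounts: []corev1.VolumeMount{{Name: "shared-data", MountPath: "/data"}},
				},
				{
					Name:         "reader",
					Image:        "docker.io/library/busybox:1.29",
					Command:      []string{"sh", "-c", "sleep 3600"},
					VolumeMounts: []corev1.VolumeMount{{Name: "shared-data", MountPath: "/data"}},
				},
			},
		},
	}

	created, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created pod", created.Name)
	// The log above shows the next step: exec `cat` on the shared file in the
	// other container (via the exec subresource) and compare the content.
}
```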
•{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":309,"completed":291,"skipped":5124,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 18:28:35.129: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 11 18:28:54.797: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 11 18:28:56.818: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745986535, loc:(*time.Location)(0x5f133f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745986535, loc:(*time.Location)(0x5f133f0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745986535, loc:(*time.Location)(0x5f133f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745986534, loc:(*time.Location)(0x5f133f0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 11 18:28:58.827: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745986535, loc:(*time.Location)(0x5f133f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745986535, loc:(*time.Location)(0x5f133f0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745986535, loc:(*time.Location)(0x5f133f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745986534, loc:(*time.Location)(0x5f133f0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 11 18:29:01.852: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 18:29:02.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2882" for this suite. STEP: Destroying namespace "webhook-2882-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:27.420 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":309,"completed":292,"skipped":5137,"failed":0} SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 18:29:02.550: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should support --unix-socket=/path [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Starting the proxy Jan 11 18:29:02.640: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-9399 proxy --unix-socket=/tmp/kubectl-proxy-unix807940581/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 18:29:03.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9399" for this suite. 
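The kubectl proxy test above starts `kubectl proxy --unix-socket=<path>` and then retrieves `/api/` through that socket. A hypothetical sketch of the client side, assuming the proxy has already been started on an illustrative socket path (this is not the framework's implementation, just a plain Go HTTP client dialing a Unix socket):

```go
// Fetch /api/ from a kubectl proxy listening on a Unix domain socket.
package main

import (
	"context"
	"fmt"
	"io"
	"net"
	"net/http"
)

func main() {
	const socket = "/tmp/kubectl-proxy.sock" // illustrative path

	client := &http.Client{
		Transport: &http.Transport{
			// Ignore the host in the URL and always dial the Unix socket.
			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
				return (&net.Dialer{}).DialContext(ctx, "unix", socket)
			},
		},
	}

	resp, err := client.Get("http://localhost/api/")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body)) // expected to list the API versions, e.g. "v1"
}
```

The host portion of the URL is arbitrary here, since the custom DialContext always connects to the socket.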
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":309,"completed":293,"skipped":5147,"failed":0} SSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 18:29:03.539: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-9062.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-9062.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9062.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-9062.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-9062.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9062.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 11 18:29:09.827: INFO: DNS probes using dns-9062/dns-test-6a7d81f4-eb79-44c8-840b-24db9a4df8d9 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 18:29:09.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9062" for this suite. 
• [SLOW TEST:6.422 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":309,"completed":294,"skipped":5153,"failed":0} [sig-node] RuntimeClass should support RuntimeClasses API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 18:29:09.962: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename runtimeclass STEP: Waiting for a default service account to be provisioned in namespace [It] should support RuntimeClasses API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: getting /apis STEP: getting /apis/node.k8s.io STEP: getting /apis/node.k8s.io/v1 STEP: creating STEP: watching Jan 11 18:29:10.492: INFO: starting watch STEP: getting STEP: listing STEP: patching STEP: updating Jan 11 18:29:10.547: INFO: waiting for watch events with expected annotations STEP: deleting STEP: deleting a collection [AfterEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 18:29:10.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "runtimeclass-9484" for this suite. •{"msg":"PASSED [sig-node] RuntimeClass should support RuntimeClasses API operations [Conformance]","total":309,"completed":295,"skipped":5153,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 18:29:10.736: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:52 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Jan 11 18:29:18.972: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 11 18:29:18.981: INFO: Pod pod-with-poststart-exec-hook still exists Jan 11 18:29:20.982: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 11 18:29:21.009: INFO: Pod pod-with-poststart-exec-hook still exists Jan 11 18:29:22.982: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 11 18:29:22.992: INFO: Pod pod-with-poststart-exec-hook still exists Jan 11 18:29:24.982: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 11 18:29:24.991: INFO: Pod pod-with-poststart-exec-hook still exists Jan 11 18:29:26.982: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 11 18:29:26.992: INFO: Pod pod-with-poststart-exec-hook still exists Jan 11 18:29:28.982: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 11 18:29:28.990: INFO: Pod pod-with-poststart-exec-hook still exists Jan 11 18:29:30.982: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 11 18:29:30.990: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 18:29:30.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-429" for this suite. • [SLOW TEST:20.271 seconds] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:43 should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":309,"completed":296,"skipped":5172,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 18:29:31.012: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward api env vars Jan 11 18:29:31.173: INFO: Waiting up to 5m0s for pod "downward-api-89ef3cde-f070-4ad6-9ac8-81513bbfdf65" in namespace 
"downward-api-7607" to be "Succeeded or Failed" Jan 11 18:29:31.192: INFO: Pod "downward-api-89ef3cde-f070-4ad6-9ac8-81513bbfdf65": Phase="Pending", Reason="", readiness=false. Elapsed: 18.534482ms Jan 11 18:29:33.200: INFO: Pod "downward-api-89ef3cde-f070-4ad6-9ac8-81513bbfdf65": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026961725s Jan 11 18:29:35.210: INFO: Pod "downward-api-89ef3cde-f070-4ad6-9ac8-81513bbfdf65": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036297649s STEP: Saw pod success Jan 11 18:29:35.210: INFO: Pod "downward-api-89ef3cde-f070-4ad6-9ac8-81513bbfdf65" satisfied condition "Succeeded or Failed" Jan 11 18:29:35.216: INFO: Trying to get logs from node leguer-worker pod downward-api-89ef3cde-f070-4ad6-9ac8-81513bbfdf65 container dapi-container: STEP: delete the pod Jan 11 18:29:35.270: INFO: Waiting for pod downward-api-89ef3cde-f070-4ad6-9ac8-81513bbfdf65 to disappear Jan 11 18:29:35.278: INFO: Pod downward-api-89ef3cde-f070-4ad6-9ac8-81513bbfdf65 no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 18:29:35.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7607" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":309,"completed":297,"skipped":5197,"failed":0} SSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 18:29:35.322: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating the pod Jan 11 18:29:35.406: INFO: PodSpec: initContainers in spec.initContainers Jan 11 18:30:24.989: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-9e65e4a4-dcae-48cd-94b3-440b4a13c8f6", GenerateName:"", Namespace:"init-container-7526", SelfLink:"", UID:"2dc76f1e-7a24-4084-be21-ff4749fae07a", ResourceVersion:"218923", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63745986575, loc:(*time.Location)(0x5f133f0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"405383804"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0x774c660), FieldsType:"FieldsV1", 
FieldsV1:(*v1.FieldsV1)(0x8754880)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0x774c700), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x8754890)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-6zfll", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0x774c900), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-6zfll", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-6zfll", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), 
Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-6zfll", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x8490ef8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"leguer-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x8a8a4c0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0x8490f80)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0x8490fa0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0x8490fa8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0x8490fac), PreemptionPolicy:(*v1.PreemptionPolicy)(0x9ec2630), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745986575, loc:(*time.Location)(0x5f133f0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745986575, loc:(*time.Location)(0x5f133f0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745986575, loc:(*time.Location)(0x5f133f0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", 
LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745986575, loc:(*time.Location)(0x5f133f0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.13", PodIP:"10.244.2.149", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.2.149"}}, StartTime:(*v1.Time)(0x774ce40), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0x9ad1090)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0x9ad10e0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://f20d624590602a485dc73a9248dc22931bdbd5800f99d87db86f2c7d0ce27116", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0x87548b0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0x87548a0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0x849102f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 18:30:24.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-7526" for this suite. 
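The status dump above shows the behaviour under test: with `restartPolicy: Always`, a failing first init container is restarted with back-off (RestartCount:3 on init1), the second init container stays Waiting, and the app container `run1` is never started. As a hypothetical sketch, a pod of that shape looks roughly like the following; names and image tags are illustrative, and the program only prints the spec rather than creating it:

```go
// Shape of a pod whose first init container always fails under restartPolicy Always.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/false"}},
				{Name: "init2", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/true"}},
			},
			Containers: []corev1.Container{
				// run1 remains Waiting (PodInitializing) for as long as init1 keeps
				// failing, which is what the pod status in the log above reflects.
				{Name: "run1", Image: "k8s.gcr.io/pause:3.2"},
			},
		},
	}

	out, err := json.MarshalIndent(pod, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```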
• [SLOW TEST:49.720 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":309,"completed":298,"skipped":5201,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 18:30:25.043: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test emptydir 0777 on tmpfs Jan 11 18:30:25.143: INFO: Waiting up to 5m0s for pod "pod-9cd7c80d-911c-431f-b650-12253229622c" in namespace "emptydir-1643" to be "Succeeded or Failed" Jan 11 18:30:25.163: INFO: Pod "pod-9cd7c80d-911c-431f-b650-12253229622c": Phase="Pending", Reason="", readiness=false. Elapsed: 20.100318ms Jan 11 18:30:27.172: INFO: Pod "pod-9cd7c80d-911c-431f-b650-12253229622c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028529859s Jan 11 18:30:29.180: INFO: Pod "pod-9cd7c80d-911c-431f-b650-12253229622c": Phase="Running", Reason="", readiness=true. Elapsed: 4.036211241s Jan 11 18:30:32.813: INFO: Pod "pod-9cd7c80d-911c-431f-b650-12253229622c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 7.67003429s STEP: Saw pod success Jan 11 18:30:32.814: INFO: Pod "pod-9cd7c80d-911c-431f-b650-12253229622c" satisfied condition "Succeeded or Failed" Jan 11 18:30:32.820: INFO: Trying to get logs from node leguer-worker2 pod pod-9cd7c80d-911c-431f-b650-12253229622c container test-container: STEP: delete the pod Jan 11 18:30:33.364: INFO: Waiting for pod pod-9cd7c80d-911c-431f-b650-12253229622c to disappear Jan 11 18:30:33.382: INFO: Pod pod-9cd7c80d-911c-431f-b650-12253229622c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 18:30:33.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1643" for this suite. 
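The EmptyDir (non-root,0777,tmpfs) test above runs a non-root container against a memory-backed emptyDir and verifies file permissions before the pod reaches "Succeeded or Failed". A hypothetical sketch of a pod with that shape (not the test's actual mounttest image or flags; the UID, names, and shell command are illustrative):

```go
// tmpfs-backed emptyDir (medium "Memory") used by a non-root container that
// creates a file with mode 0777.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	uid := int64(1001) // any non-root UID

	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-0777-tmpfs-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
			Volumes: []corev1.Volume{{
				Name: "scratch",
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "test-container",
				Image:        "docker.io/library/busybox:1.29",
				Command:      []string{"sh", "-c", "touch /mnt/f && chmod 0777 /mnt/f && ls -l /mnt/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "scratch", MountPath: "/mnt"}},
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
	// The e2e test then waits for the pod to complete and inspects the
	// container log, as the entries above show.
}
```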
• [SLOW TEST:8.354 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:45 should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":299,"skipped":5205,"failed":0} SSS ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 18:30:33.399: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8024 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-8024;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8024 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-8024;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8024.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-8024.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8024.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-8024.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8024.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-8024.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8024.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-8024.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8024.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-8024.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8024.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-8024.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8024.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 230.123.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.123.230_udp@PTR;check="$$(dig +tcp +noall +answer +search 230.123.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.123.230_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8024 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-8024;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8024 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-8024;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8024.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-8024.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8024.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-8024.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8024.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-8024.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8024.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-8024.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8024.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-8024.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8024.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-8024.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8024.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 230.123.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.123.230_udp@PTR;check="$$(dig +tcp +noall +answer +search 230.123.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.123.230_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 11 18:30:43.651: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8024/dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d: the server could not find the requested resource (get pods dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d) Jan 11 18:30:43.656: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8024/dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d: the server could not find the requested resource (get pods dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d) Jan 11 18:30:43.660: INFO: Unable to read wheezy_udp@dns-test-service.dns-8024 from pod dns-8024/dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d: the server could not find the requested resource (get pods dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d) Jan 11 18:30:43.664: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8024 from pod dns-8024/dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d: the server could not find the requested resource (get pods dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d) Jan 11 18:30:43.669: INFO: Unable to read wheezy_udp@dns-test-service.dns-8024.svc from pod dns-8024/dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d: the server could not find the requested resource (get pods dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d) Jan 11 18:30:43.674: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8024.svc from pod dns-8024/dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d: the server could not find the requested resource (get pods dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d) Jan 11 18:30:43.679: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8024.svc from pod dns-8024/dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d: the server could not find the requested resource (get pods dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d) Jan 11 18:30:43.683: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8024.svc from pod dns-8024/dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d: the server could not find the requested resource (get pods dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d) Jan 11 18:30:43.737: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8024/dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d: the server could not find the requested resource (get pods dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d) Jan 11 18:30:43.741: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8024/dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d: the server could not find the requested resource (get pods dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d) Jan 11 18:30:43.746: INFO: Unable to read jessie_udp@dns-test-service.dns-8024 from pod dns-8024/dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d: the server could not find the requested resource (get pods dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d) Jan 11 18:30:43.750: INFO: Unable to read jessie_tcp@dns-test-service.dns-8024 from pod dns-8024/dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d: the server could not find the requested resource (get pods dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d) Jan 11 18:30:43.755: INFO: Unable to read jessie_udp@dns-test-service.dns-8024.svc from pod dns-8024/dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d: the server could not find the requested resource (get pods dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d) Jan 11 18:30:43.759: INFO: Unable to read 
jessie_tcp@dns-test-service.dns-8024.svc from pod dns-8024/dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d: the server could not find the requested resource (get pods dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d) Jan 11 18:30:43.763: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8024.svc from pod dns-8024/dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d: the server could not find the requested resource (get pods dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d) Jan 11 18:30:43.767: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8024.svc from pod dns-8024/dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d: the server could not find the requested resource (get pods dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d) Jan 11 18:30:43.792: INFO: Lookups using dns-8024/dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8024 wheezy_tcp@dns-test-service.dns-8024 wheezy_udp@dns-test-service.dns-8024.svc wheezy_tcp@dns-test-service.dns-8024.svc wheezy_udp@_http._tcp.dns-test-service.dns-8024.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8024.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8024 jessie_tcp@dns-test-service.dns-8024 jessie_udp@dns-test-service.dns-8024.svc jessie_tcp@dns-test-service.dns-8024.svc jessie_udp@_http._tcp.dns-test-service.dns-8024.svc jessie_tcp@_http._tcp.dns-test-service.dns-8024.svc] Jan 11 18:30:48.799: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8024/dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d: the server could not find the requested resource (get pods dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d) Jan 11 18:30:48.803: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8024/dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d: the server could not find the requested resource (get pods dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d) Jan 11 18:30:48.807: INFO: Unable to read wheezy_udp@dns-test-service.dns-8024 from pod dns-8024/dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d: the server could not find the requested resource (get pods dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d) Jan 11 18:30:48.811: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8024 from pod dns-8024/dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d: the server could not find the requested resource (get pods dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d) Jan 11 18:30:48.814: INFO: Unable to read wheezy_udp@dns-test-service.dns-8024.svc from pod dns-8024/dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d: the server could not find the requested resource (get pods dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d) Jan 11 18:30:48.842: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8024.svc from pod dns-8024/dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d: the server could not find the requested resource (get pods dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d) Jan 11 18:30:48.846: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8024.svc from pod dns-8024/dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d: the server could not find the requested resource (get pods dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d) Jan 11 18:30:48.849: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8024.svc from pod dns-8024/dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d: the server could not find the requested resource (get pods dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d) Jan 11 18:30:48.877: INFO: Unable to read 
jessie_udp@dns-test-service from pod dns-8024/dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d: the server could not find the requested resource (get pods dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d) Jan 11 18:30:48.881: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8024/dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d: the server could not find the requested resource (get pods dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d) Jan 11 18:30:48.885: INFO: Unable to read jessie_udp@dns-test-service.dns-8024 from pod dns-8024/dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d: the server could not find the requested resource (get pods dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d) Jan 11 18:30:48.888: INFO: Unable to read jessie_tcp@dns-test-service.dns-8024 from pod dns-8024/dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d: the server could not find the requested resource (get pods dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d) Jan 11 18:30:48.892: INFO: Unable to read jessie_udp@dns-test-service.dns-8024.svc from pod dns-8024/dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d: the server could not find the requested resource (get pods dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d) Jan 11 18:30:48.896: INFO: Unable to read jessie_tcp@dns-test-service.dns-8024.svc from pod dns-8024/dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d: the server could not find the requested resource (get pods dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d) Jan 11 18:30:48.900: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8024.svc from pod dns-8024/dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d: the server could not find the requested resource (get pods dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d) Jan 11 18:30:48.903: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8024.svc from pod dns-8024/dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d: the server could not find the requested resource (get pods dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d) Jan 11 18:30:48.928: INFO: Lookups using dns-8024/dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8024 wheezy_tcp@dns-test-service.dns-8024 wheezy_udp@dns-test-service.dns-8024.svc wheezy_tcp@dns-test-service.dns-8024.svc wheezy_udp@_http._tcp.dns-test-service.dns-8024.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8024.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8024 jessie_tcp@dns-test-service.dns-8024 jessie_udp@dns-test-service.dns-8024.svc jessie_tcp@dns-test-service.dns-8024.svc jessie_udp@_http._tcp.dns-test-service.dns-8024.svc jessie_tcp@_http._tcp.dns-test-service.dns-8024.svc] Jan 11 18:30:53.799: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8024/dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d: the server could not find the requested resource (get pods dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d) Jan 11 18:30:53.805: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8024/dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d: the server could not find the requested resource (get pods dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d) Jan 11 18:30:53.809: INFO: Unable to read wheezy_udp@dns-test-service.dns-8024 from pod dns-8024/dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d: the server could not find the requested resource (get pods dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d) Jan 11 18:30:53.814: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8024 from pod 
dns-8024/dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d: the server could not find the requested resource (get pods dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d) Jan 11 18:30:53.818: INFO: Unable to read wheezy_udp@dns-test-service.dns-8024.svc from pod dns-8024/dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d: the server could not find the requested resource (get pods dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d) Jan 11 18:30:53.823: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8024.svc from pod dns-8024/dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d: the server could not find the requested resource (get pods dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d) Jan 11 18:30:53.827: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8024.svc from pod dns-8024/dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d: the server could not find the requested resource (get pods dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d) Jan 11 18:30:53.832: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8024.svc from pod dns-8024/dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d: the server could not find the requested resource (get pods dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d) Jan 11 18:30:53.866: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8024/dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d: the server could not find the requested resource (get pods dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d) Jan 11 18:30:53.872: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8024/dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d: the server could not find the requested resource (get pods dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d) Jan 11 18:30:53.877: INFO: Unable to read jessie_udp@dns-test-service.dns-8024 from pod dns-8024/dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d: the server could not find the requested resource (get pods dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d) Jan 11 18:30:53.882: INFO: Unable to read jessie_tcp@dns-test-service.dns-8024 from pod dns-8024/dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d: the server could not find the requested resource (get pods dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d) Jan 11 18:30:53.886: INFO: Unable to read jessie_udp@dns-test-service.dns-8024.svc from pod dns-8024/dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d: the server could not find the requested resource (get pods dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d) Jan 11 18:30:53.891: INFO: Unable to read jessie_tcp@dns-test-service.dns-8024.svc from pod dns-8024/dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d: the server could not find the requested resource (get pods dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d) Jan 11 18:30:53.896: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8024.svc from pod dns-8024/dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d: the server could not find the requested resource (get pods dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d) Jan 11 18:30:53.901: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8024.svc from pod dns-8024/dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d: the server could not find the requested resource (get pods dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d) Jan 11 18:30:53.931: INFO: Lookups using dns-8024/dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8024 wheezy_tcp@dns-test-service.dns-8024 wheezy_udp@dns-test-service.dns-8024.svc wheezy_tcp@dns-test-service.dns-8024.svc 
wheezy_udp@_http._tcp.dns-test-service.dns-8024.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8024.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8024 jessie_tcp@dns-test-service.dns-8024 jessie_udp@dns-test-service.dns-8024.svc jessie_tcp@dns-test-service.dns-8024.svc jessie_udp@_http._tcp.dns-test-service.dns-8024.svc jessie_tcp@_http._tcp.dns-test-service.dns-8024.svc] Jan 11 18:30:58.809: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8024/dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d: the server could not find the requested resource (get pods dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d) Jan 11 18:30:58.814: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8024/dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d: the server could not find the requested resource (get pods dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d) Jan 11 18:30:58.818: INFO: Unable to read wheezy_udp@dns-test-service.dns-8024 from pod dns-8024/dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d: the server could not find the requested resource (get pods dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d) Jan 11 18:30:58.823: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8024 from pod dns-8024/dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d: the server could not find the requested resource (get pods dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d) Jan 11 18:30:58.827: INFO: Unable to read wheezy_udp@dns-test-service.dns-8024.svc from pod dns-8024/dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d: the server could not find the requested resource (get pods dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d) Jan 11 18:30:58.831: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8024.svc from pod dns-8024/dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d: the server could not find the requested resource (get pods dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d) Jan 11 18:30:58.835: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8024.svc from pod dns-8024/dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d: the server could not find the requested resource (get pods dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d) Jan 11 18:30:58.839: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8024.svc from pod dns-8024/dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d: the server could not find the requested resource (get pods dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d) Jan 11 18:30:58.867: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8024/dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d: the server could not find the requested resource (get pods dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d) Jan 11 18:30:58.871: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8024/dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d: the server could not find the requested resource (get pods dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d) Jan 11 18:30:58.875: INFO: Unable to read jessie_udp@dns-test-service.dns-8024 from pod dns-8024/dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d: the server could not find the requested resource (get pods dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d) Jan 11 18:30:58.896: INFO: Unable to read jessie_tcp@dns-test-service.dns-8024 from pod dns-8024/dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d: the server could not find the requested resource (get pods dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d) Jan 11 18:30:58.901: INFO: Unable to read jessie_udp@dns-test-service.dns-8024.svc from pod 
dns-8024/dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d: the server could not find the requested resource (get pods dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d) Jan 11 18:30:58.907: INFO: Unable to read jessie_tcp@dns-test-service.dns-8024.svc from pod dns-8024/dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d: the server could not find the requested resource (get pods dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d) Jan 11 18:30:58.911: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8024.svc from pod dns-8024/dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d: the server could not find the requested resource (get pods dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d) Jan 11 18:30:58.914: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8024.svc from pod dns-8024/dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d: the server could not find the requested resource (get pods dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d) Jan 11 18:30:58.941: INFO: Lookups using dns-8024/dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8024 wheezy_tcp@dns-test-service.dns-8024 wheezy_udp@dns-test-service.dns-8024.svc wheezy_tcp@dns-test-service.dns-8024.svc wheezy_udp@_http._tcp.dns-test-service.dns-8024.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8024.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8024 jessie_tcp@dns-test-service.dns-8024 jessie_udp@dns-test-service.dns-8024.svc jessie_tcp@dns-test-service.dns-8024.svc jessie_udp@_http._tcp.dns-test-service.dns-8024.svc jessie_tcp@_http._tcp.dns-test-service.dns-8024.svc] Jan 11 18:31:03.800: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8024/dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d: the server could not find the requested resource (get pods dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d) Jan 11 18:31:03.805: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8024/dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d: the server could not find the requested resource (get pods dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d) Jan 11 18:31:03.810: INFO: Unable to read wheezy_udp@dns-test-service.dns-8024 from pod dns-8024/dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d: the server could not find the requested resource (get pods dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d) Jan 11 18:31:03.814: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8024 from pod dns-8024/dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d: the server could not find the requested resource (get pods dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d) Jan 11 18:31:03.818: INFO: Unable to read wheezy_udp@dns-test-service.dns-8024.svc from pod dns-8024/dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d: the server could not find the requested resource (get pods dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d) Jan 11 18:31:03.822: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8024.svc from pod dns-8024/dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d: the server could not find the requested resource (get pods dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d) Jan 11 18:31:03.825: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8024.svc from pod dns-8024/dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d: the server could not find the requested resource (get pods dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d) Jan 11 18:31:03.829: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8024.svc from pod 
dns-8024/dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d: the server could not find the requested resource (get pods dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d) Jan 11 18:31:03.858: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8024/dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d: the server could not find the requested resource (get pods dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d) Jan 11 18:31:03.863: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8024/dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d: the server could not find the requested resource (get pods dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d) Jan 11 18:31:03.868: INFO: Unable to read jessie_udp@dns-test-service.dns-8024 from pod dns-8024/dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d: the server could not find the requested resource (get pods dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d) Jan 11 18:31:03.872: INFO: Unable to read jessie_tcp@dns-test-service.dns-8024 from pod dns-8024/dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d: the server could not find the requested resource (get pods dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d) Jan 11 18:31:03.877: INFO: Unable to read jessie_udp@dns-test-service.dns-8024.svc from pod dns-8024/dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d: the server could not find the requested resource (get pods dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d) Jan 11 18:31:03.881: INFO: Unable to read jessie_tcp@dns-test-service.dns-8024.svc from pod dns-8024/dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d: the server could not find the requested resource (get pods dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d) Jan 11 18:31:03.886: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8024.svc from pod dns-8024/dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d: the server could not find the requested resource (get pods dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d) Jan 11 18:31:03.891: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8024.svc from pod dns-8024/dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d: the server could not find the requested resource (get pods dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d) Jan 11 18:31:03.915: INFO: Lookups using dns-8024/dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8024 wheezy_tcp@dns-test-service.dns-8024 wheezy_udp@dns-test-service.dns-8024.svc wheezy_tcp@dns-test-service.dns-8024.svc wheezy_udp@_http._tcp.dns-test-service.dns-8024.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8024.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8024 jessie_tcp@dns-test-service.dns-8024 jessie_udp@dns-test-service.dns-8024.svc jessie_tcp@dns-test-service.dns-8024.svc jessie_udp@_http._tcp.dns-test-service.dns-8024.svc jessie_tcp@_http._tcp.dns-test-service.dns-8024.svc] Jan 11 18:31:09.055: INFO: DNS probes using dns-8024/dns-test-a595426b-e081-4e1f-93ae-f725fca9b83d succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 18:31:09.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8024" for this suite. 
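The lookups retried above cover the short, namespace-qualified, namespace.svc-qualified, and SRV (_http._tcp) forms of the service name, over both UDP and TCP, from the wheezy and jessie query containers. They can be reproduced by hand from any pod in the dns-8024 namespace; a minimal sketch, assuming a utility pod named dnsutils with nslookup available (the pod name is illustrative and not part of this run):
# short name resolves only from inside the same namespace, via the resolver search path
kubectl exec -n dns-8024 dnsutils -- nslookup dns-test-service
# namespace-qualified and namespace.svc-qualified forms probed by the test
kubectl exec -n dns-8024 dnsutils -- nslookup dns-test-service.dns-8024
kubectl exec -n dns-8024 dnsutils -- nslookup dns-test-service.dns-8024.svc
# SRV record for the named port "http" over TCP
kubectl exec -n dns-8024 dnsutils -- nslookup -type=SRV _http._tcp.dns-test-service.dns-8024.svc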
• [SLOW TEST:36.583 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":309,"completed":300,"skipped":5208,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should patch a secret [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 18:31:09.986: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a secret [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating a secret STEP: listing secrets in all namespaces to ensure that there are more than zero STEP: patching the secret STEP: deleting the secret using a LabelSelector STEP: listing secrets in all namespaces, searching for label name and value in patch [AfterEach] [sig-api-machinery] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 18:31:10.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-929" for this suite. 
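The secret lifecycle exercised in this spec (create, list across namespaces, patch, delete by label selector) is driven through the Go client inside the e2e framework; an equivalent kubectl flow would look roughly like the following sketch, with the secret name, namespace, and label chosen for illustration only:
# create a secret and confirm it shows up in a cluster-wide listing
kubectl create secret generic patch-demo -n default --from-literal=key=value
kubectl get secrets --all-namespaces
# patch the secret: replace the data and add a label to select on later ("bmV3LXZhbHVl" is base64 for "new-value")
kubectl patch secret patch-demo -n default --type=merge -p '{"metadata":{"labels":{"patched":"true"}},"data":{"key":"bmV3LXZhbHVl"}}'
# delete by label selector, then confirm nothing matching the label remains
kubectl delete secrets -n default -l patched=true
kubectl get secrets --all-namespaces -l patched=true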
•{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":309,"completed":301,"skipped":5256,"failed":0} SSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] [sig-node] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 18:31:10.126: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Jan 11 18:31:14.255: INFO: &Pod{ObjectMeta:{send-events-dc6e46fe-fe2e-48f3-a593-ea18f0d836be events-6628 50179435-82ba-4093-834b-76f52b505f7d 219143 0 2021-01-11 18:31:10 +0000 UTC map[name:foo time:200182850] map[] [] [] [{e2e.test Update v1 2021-01-11 18:31:10 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"p\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-11 18:31:14 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.150\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bdr5b,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bdr5b,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:p,Image:k8s.gcr.io/e2e-test-images/agnhost:2.21,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bdr5b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 18:31:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 18:31:14 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 18:31:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-11 18:31:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.2.150,StartTime:2021-01-11 18:31:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-01-11 18:31:13 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.21,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:ab055cd3d45f50b90732c14593a5bf50f210871bb4f91994c756fc22db6d922a,ContainerID:containerd://d6e629982feecab2d0e333205f9ac13ee72a5c89e25958217e620f31aeebbfb4,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.150,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod Jan 11 18:31:16.267: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Jan 11 18:31:18.277: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 18:31:18.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-6628" for this suite. • [SLOW TEST:8.188 seconds] [k8s.io] [sig-node] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":309,"completed":302,"skipped":5261,"failed":0} SSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 18:31:18.316: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test emptydir 0644 on node default medium Jan 11 18:31:18.416: INFO: Waiting up to 5m0s for pod "pod-72f40477-29ce-4338-8131-40dc8e0cf0e8" in namespace "emptydir-1937" to be "Succeeded or Failed" Jan 11 18:31:18.433: INFO: Pod "pod-72f40477-29ce-4338-8131-40dc8e0cf0e8": Phase="Pending", Reason="", readiness=false. 
Elapsed: 17.320083ms Jan 11 18:31:20.442: INFO: Pod "pod-72f40477-29ce-4338-8131-40dc8e0cf0e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026037088s Jan 11 18:31:22.463: INFO: Pod "pod-72f40477-29ce-4338-8131-40dc8e0cf0e8": Phase="Running", Reason="", readiness=true. Elapsed: 4.047301907s Jan 11 18:31:24.473: INFO: Pod "pod-72f40477-29ce-4338-8131-40dc8e0cf0e8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.056644004s STEP: Saw pod success Jan 11 18:31:24.473: INFO: Pod "pod-72f40477-29ce-4338-8131-40dc8e0cf0e8" satisfied condition "Succeeded or Failed" Jan 11 18:31:24.478: INFO: Trying to get logs from node leguer-worker2 pod pod-72f40477-29ce-4338-8131-40dc8e0cf0e8 container test-container: STEP: delete the pod Jan 11 18:31:24.514: INFO: Waiting for pod pod-72f40477-29ce-4338-8131-40dc8e0cf0e8 to disappear Jan 11 18:31:24.527: INFO: Pod pod-72f40477-29ce-4338-8131-40dc8e0cf0e8 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 18:31:24.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1937" for this suite. • [SLOW TEST:6.228 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:45 should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":303,"skipped":5267,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 18:31:24.545: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 11 18:31:31.686: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 11 18:31:33.929: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745986691, loc:(*time.Location)(0x5f133f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745986691, loc:(*time.Location)(0x5f133f0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745986691, loc:(*time.Location)(0x5f133f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745986691, loc:(*time.Location)(0x5f133f0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 11 18:31:36.967: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook Jan 11 18:31:41.021: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=webhook-7713 attach --namespace=webhook-7713 to-be-attached-pod -i -c=container1' Jan 11 18:31:42.355: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 18:31:42.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7713" for this suite. STEP: Destroying namespace "webhook-7713-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:17.986 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":309,"completed":304,"skipped":5275,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 18:31:42.533: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 11 18:31:50.865: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 11 18:31:52.885: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745986710, loc:(*time.Location)(0x5f133f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745986710, loc:(*time.Location)(0x5f133f0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745986711, loc:(*time.Location)(0x5f133f0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745986710, loc:(*time.Location)(0x5f133f0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 11 18:31:55.935: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook Jan 11 18:31:55.967: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 18:31:55.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5120" for this suite. STEP: Destroying namespace "webhook-5120-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:13.603 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":309,"completed":305,"skipped":5290,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 18:31:56.138: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 18:32:25.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7683" for this suite. • [SLOW TEST:29.069 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":309,"completed":306,"skipped":5322,"failed":0} SSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 18:32:25.208: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating configMap with name configmap-test-volume-map-dbf66525-38bb-4253-b662-047a11c673e3 STEP: Creating a pod to test consume configMaps Jan 11 18:32:25.373: INFO: Waiting up to 5m0s for pod "pod-configmaps-dc85dc3f-16b2-4a53-93cf-4b16f98c4574" in namespace "configmap-2078" to be "Succeeded or Failed" Jan 11 18:32:25.377: INFO: Pod "pod-configmaps-dc85dc3f-16b2-4a53-93cf-4b16f98c4574": Phase="Pending", Reason="", readiness=false. Elapsed: 4.219129ms Jan 11 18:32:27.385: INFO: Pod "pod-configmaps-dc85dc3f-16b2-4a53-93cf-4b16f98c4574": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01200291s Jan 11 18:32:29.394: INFO: Pod "pod-configmaps-dc85dc3f-16b2-4a53-93cf-4b16f98c4574": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.021446945s STEP: Saw pod success Jan 11 18:32:29.394: INFO: Pod "pod-configmaps-dc85dc3f-16b2-4a53-93cf-4b16f98c4574" satisfied condition "Succeeded or Failed" Jan 11 18:32:29.400: INFO: Trying to get logs from node leguer-worker2 pod pod-configmaps-dc85dc3f-16b2-4a53-93cf-4b16f98c4574 container agnhost-container: STEP: delete the pod Jan 11 18:32:29.469: INFO: Waiting for pod pod-configmaps-dc85dc3f-16b2-4a53-93cf-4b16f98c4574 to disappear Jan 11 18:32:29.494: INFO: Pod pod-configmaps-dc85dc3f-16b2-4a53-93cf-4b16f98c4574 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 18:32:29.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2078" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":309,"completed":307,"skipped":5330,"failed":0} SSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 18:32:29.511: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a Namespace [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating a Namespace STEP: patching the Namespace STEP: get the Namespace and ensuring it has the label [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 18:32:29.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-1166" for this suite. STEP: Destroying namespace "nspatchtest-f5ec6110-554c-428e-a1a7-f70db59b85da-6471" for this suite. 
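The namespace patch test above follows the same create/patch/verify shape; a minimal kubectl sketch, with the namespace name and label illustrative rather than taken from this run:
# create a namespace, attach a label via a merge patch, then confirm the label is present
kubectl create namespace nspatch-demo
kubectl patch namespace nspatch-demo --type=merge -p '{"metadata":{"labels":{"testLabel":"testValue"}}}'
kubectl get namespace nspatch-demo --show-labels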
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":309,"completed":308,"skipped":5333,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 18:32:29.689: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating configMap with name projected-configmap-test-volume-36d48817-acdc-4436-abf5-8a75bf6b8a5c STEP: Creating a pod to test consume configMaps Jan 11 18:32:29.784: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b33835dd-448e-40d7-9c07-ae997819ef7e" in namespace "projected-1204" to be "Succeeded or Failed" Jan 11 18:32:29.794: INFO: Pod "pod-projected-configmaps-b33835dd-448e-40d7-9c07-ae997819ef7e": Phase="Pending", Reason="", readiness=false. Elapsed: 10.232598ms Jan 11 18:32:31.801: INFO: Pod "pod-projected-configmaps-b33835dd-448e-40d7-9c07-ae997819ef7e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017134547s Jan 11 18:32:33.810: INFO: Pod "pod-projected-configmaps-b33835dd-448e-40d7-9c07-ae997819ef7e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025843826s STEP: Saw pod success Jan 11 18:32:33.810: INFO: Pod "pod-projected-configmaps-b33835dd-448e-40d7-9c07-ae997819ef7e" satisfied condition "Succeeded or Failed" Jan 11 18:32:33.818: INFO: Trying to get logs from node leguer-worker2 pod pod-projected-configmaps-b33835dd-448e-40d7-9c07-ae997819ef7e container agnhost-container: STEP: delete the pod Jan 11 18:32:33.855: INFO: Waiting for pod pod-projected-configmaps-b33835dd-448e-40d7-9c07-ae997819ef7e to disappear Jan 11 18:32:33.866: INFO: Pod pod-projected-configmaps-b33835dd-448e-40d7-9c07-ae997819ef7e no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 18:32:33.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1204" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":309,"skipped":5348,"failed":0} SSSSSSSSSSJan 11 18:32:33.882: INFO: Running AfterSuite actions on all nodes Jan 11 18:32:33.883: INFO: Running AfterSuite actions on node 1 Jan 11 18:32:33.883: INFO: Skipping dumping logs from cluster JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml {"msg":"Test Suite completed","total":309,"completed":309,"skipped":5358,"failed":0} Ran 309 of 5667 Specs in 8743.081 seconds SUCCESS! -- 309 Passed | 0 Failed | 0 Pending | 5358 Skipped PASS