I0816 20:03:01.053777 7 test_context.go:419] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready I0816 20:03:01.060080 7 e2e.go:109] Starting e2e run "e5ea7438-204b-4c86-a5be-a155722d35c4" on Ginkgo node 1 {"msg":"Test Suite starting","total":278,"completed":0,"skipped":0,"failed":0} Running Suite: Kubernetes e2e suite =================================== Random Seed: 1597608168 - Will randomize all specs Will run 278 of 4844 specs Aug 16 20:03:01.603: INFO: >>> kubeConfig: /root/.kube/config Aug 16 20:03:01.660: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable Aug 16 20:03:01.846: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Aug 16 20:03:02.033: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Aug 16 20:03:02.033: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. Aug 16 20:03:02.033: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start Aug 16 20:03:02.078: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed) Aug 16 20:03:02.078: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed) Aug 16 20:03:02.078: INFO: e2e test version: v1.17.11 Aug 16 20:03:02.083: INFO: kube-apiserver version: v1.17.5 Aug 16 20:03:02.086: INFO: >>> kubeConfig: /root/.kube/config Aug 16 20:03:02.106: INFO: Cluster IP family: ipv4 SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:03:02.110: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook Aug 16 20:03:02.206: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. 
STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 16 20:03:06.328: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 16 20:03:08.356: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733204986, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733204986, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733204986, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733204986, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 16 20:03:11.416: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:03:11.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9601" for this suite. STEP: Destroying namespace "webhook-9601-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.685 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":278,"completed":1,"skipped":2,"failed":0} S ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:03:11.798: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Aug 16 20:03:16.498: INFO: Successfully updated pod "labelsupdate6db54978-f638-44da-822b-6c92f6eb1bd5" [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:03:18.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1997" for this suite. 
• [SLOW TEST:6.814 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":2,"skipped":3,"failed":0} SSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:03:18.614: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-ebd74ff6-bb33-4bd4-85c5-a689877b10e7 STEP: Creating a pod to test consume configMaps Aug 16 20:03:21.019: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-93a0d734-cb11-44b1-87eb-696618ed6fb1" in namespace "projected-678" to be "success or failure" Aug 16 20:03:21.040: INFO: Pod "pod-projected-configmaps-93a0d734-cb11-44b1-87eb-696618ed6fb1": Phase="Pending", Reason="", readiness=false. Elapsed: 20.536551ms Aug 16 20:03:23.048: INFO: Pod "pod-projected-configmaps-93a0d734-cb11-44b1-87eb-696618ed6fb1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029539622s Aug 16 20:03:25.199: INFO: Pod "pod-projected-configmaps-93a0d734-cb11-44b1-87eb-696618ed6fb1": Phase="Running", Reason="", readiness=true. Elapsed: 4.180076696s Aug 16 20:03:27.208: INFO: Pod "pod-projected-configmaps-93a0d734-cb11-44b1-87eb-696618ed6fb1": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.189121908s STEP: Saw pod success Aug 16 20:03:27.209: INFO: Pod "pod-projected-configmaps-93a0d734-cb11-44b1-87eb-696618ed6fb1" satisfied condition "success or failure" Aug 16 20:03:27.213: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-93a0d734-cb11-44b1-87eb-696618ed6fb1 container projected-configmap-volume-test: STEP: delete the pod Aug 16 20:03:27.266: INFO: Waiting for pod pod-projected-configmaps-93a0d734-cb11-44b1-87eb-696618ed6fb1 to disappear Aug 16 20:03:27.274: INFO: Pod pod-projected-configmaps-93a0d734-cb11-44b1-87eb-696618ed6fb1 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:03:27.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-678" for this suite. • [SLOW TEST:8.673 seconds] [sig-storage] Projected configMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":3,"skipped":6,"failed":0} SSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:03:27.287: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override all Aug 16 20:03:29.298: INFO: Waiting up to 5m0s for pod "client-containers-8919c112-4396-4a84-939e-8de783ad3cc0" in namespace "containers-3285" to be "success or failure" Aug 16 20:03:29.546: INFO: Pod "client-containers-8919c112-4396-4a84-939e-8de783ad3cc0": Phase="Pending", Reason="", readiness=false. Elapsed: 246.890242ms Aug 16 20:03:31.551: INFO: Pod "client-containers-8919c112-4396-4a84-939e-8de783ad3cc0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.252723547s Aug 16 20:03:33.671: INFO: Pod "client-containers-8919c112-4396-4a84-939e-8de783ad3cc0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.372804412s Aug 16 20:03:35.752: INFO: Pod "client-containers-8919c112-4396-4a84-939e-8de783ad3cc0": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.453851209s Aug 16 20:03:37.759: INFO: Pod "client-containers-8919c112-4396-4a84-939e-8de783ad3cc0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.460365735s Aug 16 20:03:39.766: INFO: Pod "client-containers-8919c112-4396-4a84-939e-8de783ad3cc0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.466975053s STEP: Saw pod success Aug 16 20:03:39.766: INFO: Pod "client-containers-8919c112-4396-4a84-939e-8de783ad3cc0" satisfied condition "success or failure" Aug 16 20:03:39.771: INFO: Trying to get logs from node jerma-worker2 pod client-containers-8919c112-4396-4a84-939e-8de783ad3cc0 container test-container: STEP: delete the pod Aug 16 20:03:39.789: INFO: Waiting for pod client-containers-8919c112-4396-4a84-939e-8de783ad3cc0 to disappear Aug 16 20:03:39.793: INFO: Pod client-containers-8919c112-4396-4a84-939e-8de783ad3cc0 no longer exists [AfterEach] [k8s.io] Docker Containers /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:03:39.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-3285" for this suite. • [SLOW TEST:12.515 seconds] [k8s.io] Docker Containers /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":278,"completed":4,"skipped":11,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:03:39.804: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-ae897753-1163-440f-8441-16f790379b6e STEP: Creating a pod to test consume secrets Aug 16 20:03:39.915: INFO: Waiting up to 5m0s for pod "pod-secrets-9250c07c-c584-4168-8a55-fa6818382d23" in namespace "secrets-2824" to be "success or failure" Aug 16 20:03:39.933: INFO: Pod "pod-secrets-9250c07c-c584-4168-8a55-fa6818382d23": Phase="Pending", Reason="", readiness=false. Elapsed: 17.465925ms Aug 16 20:03:41.940: INFO: Pod "pod-secrets-9250c07c-c584-4168-8a55-fa6818382d23": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024533411s Aug 16 20:03:43.947: INFO: Pod "pod-secrets-9250c07c-c584-4168-8a55-fa6818382d23": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.031287581s Aug 16 20:03:46.307: INFO: Pod "pod-secrets-9250c07c-c584-4168-8a55-fa6818382d23": Phase="Pending", Reason="", readiness=false. Elapsed: 6.391545227s Aug 16 20:03:48.313: INFO: Pod "pod-secrets-9250c07c-c584-4168-8a55-fa6818382d23": Phase="Pending", Reason="", readiness=false. Elapsed: 8.397860104s Aug 16 20:03:50.319: INFO: Pod "pod-secrets-9250c07c-c584-4168-8a55-fa6818382d23": Phase="Pending", Reason="", readiness=false. Elapsed: 10.404043921s Aug 16 20:03:53.859: INFO: Pod "pod-secrets-9250c07c-c584-4168-8a55-fa6818382d23": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.943902016s STEP: Saw pod success Aug 16 20:03:53.860: INFO: Pod "pod-secrets-9250c07c-c584-4168-8a55-fa6818382d23" satisfied condition "success or failure" Aug 16 20:03:54.313: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-9250c07c-c584-4168-8a55-fa6818382d23 container secret-volume-test: STEP: delete the pod Aug 16 20:03:54.550: INFO: Waiting for pod pod-secrets-9250c07c-c584-4168-8a55-fa6818382d23 to disappear Aug 16 20:03:54.607: INFO: Pod pod-secrets-9250c07c-c584-4168-8a55-fa6818382d23 no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:03:54.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2824" for this suite. • [SLOW TEST:14.853 seconds] [sig-storage] Secrets /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":5,"skipped":23,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:03:54.659: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should get a host IP [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating pod Aug 16 20:04:00.757: INFO: Pod pod-hostip-171233f6-485a-4b1e-ab72-b6dd33e5054c has hostIP: 172.18.0.3 [AfterEach] [k8s.io] Pods /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:04:00.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3745" for 
this suite. • [SLOW TEST:6.113 seconds] [k8s.io] Pods /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should get a host IP [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":278,"completed":6,"skipped":42,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:04:00.774: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Aug 16 20:04:08.997: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 16 20:04:09.005: INFO: Pod pod-with-prestop-exec-hook still exists Aug 16 20:04:11.005: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 16 20:04:11.010: INFO: Pod pod-with-prestop-exec-hook still exists Aug 16 20:04:13.005: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 16 20:04:13.012: INFO: Pod pod-with-prestop-exec-hook still exists Aug 16 20:04:15.005: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 16 20:04:15.010: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:04:15.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-435" for this suite. 
• [SLOW TEST:14.257 seconds] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":278,"completed":7,"skipped":62,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:04:15.038: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:04:15.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2272" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. 
[Conformance]","total":278,"completed":8,"skipped":120,"failed":0} SS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:04:15.289: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's cpu limit [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Aug 16 20:04:15.455: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fa71d791-23ed-4d70-b4a5-317ee270333d" in namespace "projected-5135" to be "success or failure" Aug 16 20:04:15.582: INFO: Pod "downwardapi-volume-fa71d791-23ed-4d70-b4a5-317ee270333d": Phase="Pending", Reason="", readiness=false. Elapsed: 127.011537ms Aug 16 20:04:17.778: INFO: Pod "downwardapi-volume-fa71d791-23ed-4d70-b4a5-317ee270333d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.323511251s Aug 16 20:04:19.784: INFO: Pod "downwardapi-volume-fa71d791-23ed-4d70-b4a5-317ee270333d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.329253277s STEP: Saw pod success Aug 16 20:04:19.784: INFO: Pod "downwardapi-volume-fa71d791-23ed-4d70-b4a5-317ee270333d" satisfied condition "success or failure" Aug 16 20:04:19.789: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-fa71d791-23ed-4d70-b4a5-317ee270333d container client-container: STEP: delete the pod Aug 16 20:04:19.822: INFO: Waiting for pod downwardapi-volume-fa71d791-23ed-4d70-b4a5-317ee270333d to disappear Aug 16 20:04:19.830: INFO: Pod downwardapi-volume-fa71d791-23ed-4d70-b4a5-317ee270333d no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:04:19.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5135" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":9,"skipped":122,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:04:19.845: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name s-test-opt-del-672b285f-a2f8-4224-8040-02cae7e71774 STEP: Creating secret with name s-test-opt-upd-62c790e0-5bcc-4bcf-af23-27cf4cb2480b STEP: Creating the pod STEP: Deleting secret s-test-opt-del-672b285f-a2f8-4224-8040-02cae7e71774 STEP: Updating secret s-test-opt-upd-62c790e0-5bcc-4bcf-af23-27cf4cb2480b STEP: Creating secret with name s-test-opt-create-727d92b2-2726-486e-9280-65694d7ed22e STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:04:32.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1831" for this suite. 
• [SLOW TEST:12.494 seconds] [sig-storage] Secrets /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":10,"skipped":166,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:04:32.340: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir volume type on node default medium Aug 16 20:04:32.474: INFO: Waiting up to 5m0s for pod "pod-f6bc730f-0f60-4f0b-91e8-5b070b1f2840" in namespace "emptydir-4627" to be "success or failure" Aug 16 20:04:32.513: INFO: Pod "pod-f6bc730f-0f60-4f0b-91e8-5b070b1f2840": Phase="Pending", Reason="", readiness=false. Elapsed: 38.67536ms Aug 16 20:04:34.519: INFO: Pod "pod-f6bc730f-0f60-4f0b-91e8-5b070b1f2840": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044631671s Aug 16 20:04:36.541: INFO: Pod "pod-f6bc730f-0f60-4f0b-91e8-5b070b1f2840": Phase="Pending", Reason="", readiness=false. Elapsed: 4.066426082s Aug 16 20:04:38.546: INFO: Pod "pod-f6bc730f-0f60-4f0b-91e8-5b070b1f2840": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.071192842s STEP: Saw pod success Aug 16 20:04:38.546: INFO: Pod "pod-f6bc730f-0f60-4f0b-91e8-5b070b1f2840" satisfied condition "success or failure" Aug 16 20:04:38.550: INFO: Trying to get logs from node jerma-worker pod pod-f6bc730f-0f60-4f0b-91e8-5b070b1f2840 container test-container: STEP: delete the pod Aug 16 20:04:38.570: INFO: Waiting for pod pod-f6bc730f-0f60-4f0b-91e8-5b070b1f2840 to disappear Aug 16 20:04:38.815: INFO: Pod pod-f6bc730f-0f60-4f0b-91e8-5b070b1f2840 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:04:38.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4627" for this suite. 
• [SLOW TEST:6.490 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":11,"skipped":170,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:04:38.833: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Aug 16 20:04:38.973: INFO: Waiting up to 5m0s for pod "downward-api-0766cf93-b8a9-4a01-9ceb-f3d11ea4f123" in namespace "downward-api-2434" to be "success or failure" Aug 16 20:04:38.982: INFO: Pod "downward-api-0766cf93-b8a9-4a01-9ceb-f3d11ea4f123": Phase="Pending", Reason="", readiness=false. Elapsed: 8.288141ms Aug 16 20:04:40.987: INFO: Pod "downward-api-0766cf93-b8a9-4a01-9ceb-f3d11ea4f123": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013759883s Aug 16 20:04:43.108: INFO: Pod "downward-api-0766cf93-b8a9-4a01-9ceb-f3d11ea4f123": Phase="Pending", Reason="", readiness=false. Elapsed: 4.134825871s Aug 16 20:04:45.276: INFO: Pod "downward-api-0766cf93-b8a9-4a01-9ceb-f3d11ea4f123": Phase="Running", Reason="", readiness=true. Elapsed: 6.302844727s Aug 16 20:04:47.309: INFO: Pod "downward-api-0766cf93-b8a9-4a01-9ceb-f3d11ea4f123": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.335487524s STEP: Saw pod success Aug 16 20:04:47.309: INFO: Pod "downward-api-0766cf93-b8a9-4a01-9ceb-f3d11ea4f123" satisfied condition "success or failure" Aug 16 20:04:47.916: INFO: Trying to get logs from node jerma-worker pod downward-api-0766cf93-b8a9-4a01-9ceb-f3d11ea4f123 container dapi-container: STEP: delete the pod Aug 16 20:04:48.951: INFO: Waiting for pod downward-api-0766cf93-b8a9-4a01-9ceb-f3d11ea4f123 to disappear Aug 16 20:04:49.881: INFO: Pod downward-api-0766cf93-b8a9-4a01-9ceb-f3d11ea4f123 no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:04:49.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2434" for this suite. 
• [SLOW TEST:11.571 seconds] [sig-node] Downward API /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":278,"completed":12,"skipped":207,"failed":0} SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:04:50.405: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should support proxy with --port 0 [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting the proxy server Aug 16 20:04:52.861: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:04:54.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2247" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":278,"completed":13,"skipped":218,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:04:54.209: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-b970e09d-40a1-4678-9e68-e27d4b57084b STEP: Creating a pod to test consume secrets Aug 16 20:04:56.688: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-035dd3b5-da24-400a-a150-e75f0572539b" in namespace "projected-5128" to be "success or failure" Aug 16 20:04:57.067: INFO: Pod "pod-projected-secrets-035dd3b5-da24-400a-a150-e75f0572539b": Phase="Pending", Reason="", readiness=false. Elapsed: 378.421828ms Aug 16 20:04:59.223: INFO: Pod "pod-projected-secrets-035dd3b5-da24-400a-a150-e75f0572539b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.534847021s Aug 16 20:05:01.265: INFO: Pod "pod-projected-secrets-035dd3b5-da24-400a-a150-e75f0572539b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.576130258s Aug 16 20:05:03.283: INFO: Pod "pod-projected-secrets-035dd3b5-da24-400a-a150-e75f0572539b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.594698224s STEP: Saw pod success Aug 16 20:05:03.284: INFO: Pod "pod-projected-secrets-035dd3b5-da24-400a-a150-e75f0572539b" satisfied condition "success or failure" Aug 16 20:05:03.290: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-035dd3b5-da24-400a-a150-e75f0572539b container projected-secret-volume-test: STEP: delete the pod Aug 16 20:05:03.325: INFO: Waiting for pod pod-projected-secrets-035dd3b5-da24-400a-a150-e75f0572539b to disappear Aug 16 20:05:03.353: INFO: Pod pod-projected-secrets-035dd3b5-da24-400a-a150-e75f0572539b no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:05:03.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5128" for this suite. 
• [SLOW TEST:9.164 seconds] [sig-storage] Projected secret /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":14,"skipped":254,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:05:03.374: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:05:10.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7899" for this suite. 
• [SLOW TEST:7.580 seconds] [sig-api-machinery] Watchers /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":278,"completed":15,"skipped":265,"failed":0} SSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:05:10.955: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be updated [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Aug 16 20:05:17.707: INFO: Successfully updated pod "pod-update-007d6c29-5f52-4ca1-9a10-f0519791d05e" STEP: verifying the updated pod is in kubernetes Aug 16 20:05:17.729: INFO: Pod update OK [AfterEach] [k8s.io] Pods /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:05:17.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2278" for this suite. 
• [SLOW TEST:6.786 seconds] [k8s.io] Pods /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be updated [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":278,"completed":16,"skipped":272,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:05:17.744: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Aug 16 20:05:18.070: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"1588dfec-f7d4-4f6d-893b-c7268feac964", Controller:(*bool)(0x40028205da), BlockOwnerDeletion:(*bool)(0x40028205db)}} Aug 16 20:05:18.121: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"6bad6aff-2e94-4ca5-8897-4ff5c642eff8", Controller:(*bool)(0x400282076a), BlockOwnerDeletion:(*bool)(0x400282076b)}} Aug 16 20:05:18.147: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"7f6d9526-42b2-498e-bedf-fbb3e4ad5429", Controller:(*bool)(0x4002820962), BlockOwnerDeletion:(*bool)(0x4002820963)}} [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:05:23.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8563" for this suite. 
• [SLOW TEST:5.941 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":278,"completed":17,"skipped":294,"failed":0} SSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:05:23.686: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ExternalName to NodePort [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service externalname-service with the type=ExternalName in namespace services-5482 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-5482 I0816 20:05:27.682542 7 runners.go:189] Created replication controller with name: externalname-service, namespace: services-5482, replica count: 2 I0816 20:05:30.736746 7 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0816 20:05:33.737787 7 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0816 20:05:36.739198 7 runners.go:189] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0816 20:05:39.739679 7 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Aug 16 20:05:39.740: INFO: Creating new exec pod Aug 16 20:05:47.498: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-5482 execpodt8gsf -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Aug 16 20:05:56.247: INFO: stderr: "I0816 20:05:56.084403 59 log.go:172] (0x4000af2b00) (0x4000a24000) Create stream\nI0816 20:05:56.087828 59 log.go:172] (0x4000af2b00) (0x4000a24000) Stream added, broadcasting: 1\nI0816 20:05:56.096423 59 log.go:172] (0x4000af2b00) Reply frame received for 1\nI0816 20:05:56.097056 59 log.go:172] (0x4000af2b00) (0x4000a04000) Create stream\nI0816 20:05:56.097137 59 log.go:172] (0x4000af2b00) (0x4000a04000) Stream added, 
broadcasting: 3\nI0816 20:05:56.099078 59 log.go:172] (0x4000af2b00) Reply frame received for 3\nI0816 20:05:56.099508 59 log.go:172] (0x4000af2b00) (0x4000ace000) Create stream\nI0816 20:05:56.099605 59 log.go:172] (0x4000af2b00) (0x4000ace000) Stream added, broadcasting: 5\nI0816 20:05:56.101084 59 log.go:172] (0x4000af2b00) Reply frame received for 5\nI0816 20:05:56.184419 59 log.go:172] (0x4000af2b00) Data frame received for 5\nI0816 20:05:56.184848 59 log.go:172] (0x4000ace000) (5) Data frame handling\nI0816 20:05:56.185372 59 log.go:172] (0x4000ace000) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0816 20:05:56.227852 59 log.go:172] (0x4000af2b00) Data frame received for 5\nI0816 20:05:56.227981 59 log.go:172] (0x4000ace000) (5) Data frame handling\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0816 20:05:56.228106 59 log.go:172] (0x4000af2b00) Data frame received for 3\nI0816 20:05:56.228232 59 log.go:172] (0x4000a04000) (3) Data frame handling\nI0816 20:05:56.228817 59 log.go:172] (0x4000ace000) (5) Data frame sent\nI0816 20:05:56.228931 59 log.go:172] (0x4000af2b00) Data frame received for 5\nI0816 20:05:56.229003 59 log.go:172] (0x4000ace000) (5) Data frame handling\nI0816 20:05:56.229510 59 log.go:172] (0x4000af2b00) Data frame received for 1\nI0816 20:05:56.229618 59 log.go:172] (0x4000a24000) (1) Data frame handling\nI0816 20:05:56.229729 59 log.go:172] (0x4000a24000) (1) Data frame sent\nI0816 20:05:56.231354 59 log.go:172] (0x4000af2b00) (0x4000a24000) Stream removed, broadcasting: 1\nI0816 20:05:56.234725 59 log.go:172] (0x4000af2b00) Go away received\nI0816 20:05:56.236504 59 log.go:172] (0x4000af2b00) (0x4000a24000) Stream removed, broadcasting: 1\nI0816 20:05:56.237201 59 log.go:172] (0x4000af2b00) (0x4000a04000) Stream removed, broadcasting: 3\nI0816 20:05:56.237648 59 log.go:172] (0x4000af2b00) (0x4000ace000) Stream removed, broadcasting: 5\n" Aug 16 20:05:56.248: INFO: stdout: "" Aug 16 20:05:56.253: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-5482 execpodt8gsf -- /bin/sh -x -c nc -zv -t -w 2 10.105.76.191 80' Aug 16 20:05:57.940: INFO: stderr: "I0816 20:05:57.811783 90 log.go:172] (0x4000ae89a0) (0x4000815cc0) Create stream\nI0816 20:05:57.814171 90 log.go:172] (0x4000ae89a0) (0x4000815cc0) Stream added, broadcasting: 1\nI0816 20:05:57.824599 90 log.go:172] (0x4000ae89a0) Reply frame received for 1\nI0816 20:05:57.825610 90 log.go:172] (0x4000ae89a0) (0x4000adc000) Create stream\nI0816 20:05:57.825687 90 log.go:172] (0x4000ae89a0) (0x4000adc000) Stream added, broadcasting: 3\nI0816 20:05:57.827733 90 log.go:172] (0x4000ae89a0) Reply frame received for 3\nI0816 20:05:57.828499 90 log.go:172] (0x4000ae89a0) (0x400050d4a0) Create stream\nI0816 20:05:57.828671 90 log.go:172] (0x4000ae89a0) (0x400050d4a0) Stream added, broadcasting: 5\nI0816 20:05:57.830720 90 log.go:172] (0x4000ae89a0) Reply frame received for 5\nI0816 20:05:57.923997 90 log.go:172] (0x4000ae89a0) Data frame received for 5\nI0816 20:05:57.924262 90 log.go:172] (0x4000ae89a0) Data frame received for 3\nI0816 20:05:57.924391 90 log.go:172] (0x4000adc000) (3) Data frame handling\nI0816 20:05:57.924539 90 log.go:172] (0x400050d4a0) (5) Data frame handling\nI0816 20:05:57.924718 90 log.go:172] (0x4000ae89a0) Data frame received for 1\nI0816 20:05:57.924866 90 log.go:172] (0x4000815cc0) (1) Data frame handling\nI0816 20:05:57.925869 90 log.go:172] (0x4000815cc0) (1) Data frame sent\n+ nc -zv -t -w 2 10.105.76.191 
80\nConnection to 10.105.76.191 80 port [tcp/http] succeeded!\nI0816 20:05:57.926202 90 log.go:172] (0x400050d4a0) (5) Data frame sent\nI0816 20:05:57.926664 90 log.go:172] (0x4000ae89a0) Data frame received for 5\nI0816 20:05:57.926723 90 log.go:172] (0x400050d4a0) (5) Data frame handling\nI0816 20:05:57.927603 90 log.go:172] (0x4000ae89a0) (0x4000815cc0) Stream removed, broadcasting: 1\nI0816 20:05:57.928918 90 log.go:172] (0x4000ae89a0) Go away received\nI0816 20:05:57.931797 90 log.go:172] (0x4000ae89a0) (0x4000815cc0) Stream removed, broadcasting: 1\nI0816 20:05:57.932065 90 log.go:172] (0x4000ae89a0) (0x4000adc000) Stream removed, broadcasting: 3\nI0816 20:05:57.932270 90 log.go:172] (0x4000ae89a0) (0x400050d4a0) Stream removed, broadcasting: 5\n" Aug 16 20:05:57.941: INFO: stdout: "" Aug 16 20:05:57.942: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-5482 execpodt8gsf -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.6 32295' Aug 16 20:05:59.424: INFO: stderr: "I0816 20:05:59.303004 113 log.go:172] (0x4000612000) (0x4000a78000) Create stream\nI0816 20:05:59.308089 113 log.go:172] (0x4000612000) (0x4000a78000) Stream added, broadcasting: 1\nI0816 20:05:59.324961 113 log.go:172] (0x4000612000) Reply frame received for 1\nI0816 20:05:59.326356 113 log.go:172] (0x4000612000) (0x4000a780a0) Create stream\nI0816 20:05:59.326466 113 log.go:172] (0x4000612000) (0x4000a780a0) Stream added, broadcasting: 3\nI0816 20:05:59.328882 113 log.go:172] (0x4000612000) Reply frame received for 3\nI0816 20:05:59.329386 113 log.go:172] (0x4000612000) (0x4000811ae0) Create stream\nI0816 20:05:59.329516 113 log.go:172] (0x4000612000) (0x4000811ae0) Stream added, broadcasting: 5\nI0816 20:05:59.331669 113 log.go:172] (0x4000612000) Reply frame received for 5\nI0816 20:05:59.402296 113 log.go:172] (0x4000612000) Data frame received for 5\nI0816 20:05:59.402717 113 log.go:172] (0x4000612000) Data frame received for 3\nI0816 20:05:59.402872 113 log.go:172] (0x4000a780a0) (3) Data frame handling\nI0816 20:05:59.402959 113 log.go:172] (0x4000811ae0) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.6 32295\nI0816 20:05:59.404842 113 log.go:172] (0x4000811ae0) (5) Data frame sent\nI0816 20:05:59.405034 113 log.go:172] (0x4000612000) Data frame received for 5\nI0816 20:05:59.405123 113 log.go:172] (0x4000811ae0) (5) Data frame handling\nI0816 20:05:59.405190 113 log.go:172] (0x4000811ae0) (5) Data frame sent\nI0816 20:05:59.405245 113 log.go:172] (0x4000612000) Data frame received for 5\nConnection to 172.18.0.6 32295 port [tcp/32295] succeeded!\nI0816 20:05:59.405310 113 log.go:172] (0x4000811ae0) (5) Data frame handling\nI0816 20:05:59.406201 113 log.go:172] (0x4000612000) Data frame received for 1\nI0816 20:05:59.406262 113 log.go:172] (0x4000a78000) (1) Data frame handling\nI0816 20:05:59.406325 113 log.go:172] (0x4000a78000) (1) Data frame sent\nI0816 20:05:59.407614 113 log.go:172] (0x4000612000) (0x4000a78000) Stream removed, broadcasting: 1\nI0816 20:05:59.410387 113 log.go:172] (0x4000612000) Go away received\nI0816 20:05:59.412668 113 log.go:172] (0x4000612000) (0x4000a78000) Stream removed, broadcasting: 1\nI0816 20:05:59.413215 113 log.go:172] (0x4000612000) (0x4000a780a0) Stream removed, broadcasting: 3\nI0816 20:05:59.413374 113 log.go:172] (0x4000612000) (0x4000811ae0) Stream removed, broadcasting: 5\n" Aug 16 20:05:59.425: INFO: stdout: "" Aug 16 20:05:59.425: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec 
--namespace=services-5482 execpodt8gsf -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.3 32295' Aug 16 20:06:01.095: INFO: stderr: "I0816 20:06:00.950508 136 log.go:172] (0x4000a4c000) (0x4000b70000) Create stream\nI0816 20:06:00.957201 136 log.go:172] (0x4000a4c000) (0x4000b70000) Stream added, broadcasting: 1\nI0816 20:06:00.967145 136 log.go:172] (0x4000a4c000) Reply frame received for 1\nI0816 20:06:00.967961 136 log.go:172] (0x4000a4c000) (0x4000952000) Create stream\nI0816 20:06:00.968073 136 log.go:172] (0x4000a4c000) (0x4000952000) Stream added, broadcasting: 3\nI0816 20:06:00.970396 136 log.go:172] (0x4000a4c000) Reply frame received for 3\nI0816 20:06:00.970934 136 log.go:172] (0x4000a4c000) (0x4000b700a0) Create stream\nI0816 20:06:00.971098 136 log.go:172] (0x4000a4c000) (0x4000b700a0) Stream added, broadcasting: 5\nI0816 20:06:00.972523 136 log.go:172] (0x4000a4c000) Reply frame received for 5\nI0816 20:06:01.077000 136 log.go:172] (0x4000a4c000) Data frame received for 5\nI0816 20:06:01.077422 136 log.go:172] (0x4000a4c000) Data frame received for 3\nI0816 20:06:01.077705 136 log.go:172] (0x4000952000) (3) Data frame handling\nI0816 20:06:01.077950 136 log.go:172] (0x4000b700a0) (5) Data frame handling\nI0816 20:06:01.078430 136 log.go:172] (0x4000a4c000) Data frame received for 1\nI0816 20:06:01.078602 136 log.go:172] (0x4000b70000) (1) Data frame handling\nI0816 20:06:01.079869 136 log.go:172] (0x4000b700a0) (5) Data frame sent\n+ nc -zv -t -w 2 172.18.0.3 32295\nConnection to 172.18.0.3 32295 port [tcp/32295] succeeded!\nI0816 20:06:01.080450 136 log.go:172] (0x4000a4c000) Data frame received for 5\nI0816 20:06:01.080535 136 log.go:172] (0x4000b700a0) (5) Data frame handling\nI0816 20:06:01.081354 136 log.go:172] (0x4000b70000) (1) Data frame sent\nI0816 20:06:01.082429 136 log.go:172] (0x4000a4c000) (0x4000b70000) Stream removed, broadcasting: 1\nI0816 20:06:01.083585 136 log.go:172] (0x4000a4c000) Go away received\nI0816 20:06:01.086015 136 log.go:172] (0x4000a4c000) (0x4000b70000) Stream removed, broadcasting: 1\nI0816 20:06:01.086478 136 log.go:172] (0x4000a4c000) (0x4000952000) Stream removed, broadcasting: 3\nI0816 20:06:01.086674 136 log.go:172] (0x4000a4c000) (0x4000b700a0) Stream removed, broadcasting: 5\n" Aug 16 20:06:01.096: INFO: stdout: "" Aug 16 20:06:01.096: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:06:01.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5482" for this suite. 
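
The Services spec above creates an ExternalName service, converts it to type=NodePort, backs it with a two-replica replication controller, and then probes the service name, the cluster IP (10.105.76.191), and both node addresses on node port 32295 with nc from an exec pod; the nc invocations are visible verbatim in the stderr dumps. A rough kubectl-level sketch of the type change and the first probe, reusing names from the log where they appear (the external name is a placeholder, and the backing replication controller and exec pod creation are omitted; the test itself drives the API through the Go e2e framework rather than kubectl):

# Start as ExternalName (the external name here is a placeholder).
kubectl -n services-5482 apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: externalname-service
spec:
  type: ExternalName
  externalName: example.com
EOF

# Flip to NodePort: drop externalName, set the type, and give it a port so the
# apiserver can allocate a cluster IP and a node port.
kubectl -n services-5482 patch service externalname-service --type=json -p '[
  {"op": "remove",  "path": "/spec/externalName"},
  {"op": "replace", "path": "/spec/type", "value": "NodePort"},
  {"op": "add",     "path": "/spec/ports", "value": [{"port": 80, "targetPort": 80}]}
]'

# Same probe the log runs from its exec pod (execpodt8gsf in this run):
kubectl -n services-5482 exec execpodt8gsf -- /bin/sh -x -c 'nc -zv -t -w 2 externalname-service 80'
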
[AfterEach] [sig-network] Services /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:38.043 seconds] [sig-network] Services /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":278,"completed":18,"skipped":301,"failed":0} [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:06:01.730: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Aug 16 20:06:02.078: INFO: Waiting up to 5m0s for pod "downward-api-6e70875d-ff8b-47c9-abe9-7fdf0724440a" in namespace "downward-api-7664" to be "success or failure" Aug 16 20:06:02.294: INFO: Pod "downward-api-6e70875d-ff8b-47c9-abe9-7fdf0724440a": Phase="Pending", Reason="", readiness=false. Elapsed: 215.099258ms Aug 16 20:06:05.062: INFO: Pod "downward-api-6e70875d-ff8b-47c9-abe9-7fdf0724440a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.983666792s Aug 16 20:06:07.262: INFO: Pod "downward-api-6e70875d-ff8b-47c9-abe9-7fdf0724440a": Phase="Pending", Reason="", readiness=false. Elapsed: 5.183127215s Aug 16 20:06:09.639: INFO: Pod "downward-api-6e70875d-ff8b-47c9-abe9-7fdf0724440a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 7.560205437s STEP: Saw pod success Aug 16 20:06:09.639: INFO: Pod "downward-api-6e70875d-ff8b-47c9-abe9-7fdf0724440a" satisfied condition "success or failure" Aug 16 20:06:09.722: INFO: Trying to get logs from node jerma-worker2 pod downward-api-6e70875d-ff8b-47c9-abe9-7fdf0724440a container dapi-container: STEP: delete the pod Aug 16 20:06:09.791: INFO: Waiting for pod downward-api-6e70875d-ff8b-47c9-abe9-7fdf0724440a to disappear Aug 16 20:06:09.817: INFO: Pod downward-api-6e70875d-ff8b-47c9-abe9-7fdf0724440a no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:06:09.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7664" for this suite. 
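
The Downward API spec above verifies that a container can see its own limits.cpu/memory and requests.cpu/memory as environment variables, reading them back from the pod's log once it has Succeeded. A minimal pod that exercises the same resourceFieldRef mechanism; names, image, and resource values here are illustrative, not the ones the framework uses:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.36
    command: ["sh", "-c", "env | grep -E '^(CPU|MEMORY)_'"]
    resources:
      requests: {cpu: 250m, memory: 32Mi}
      limits:   {cpu: 500m, memory: 64Mi}
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          containerName: dapi-container
          resource: limits.cpu
    - name: MEMORY_REQUEST
      valueFrom:
        resourceFieldRef:
          containerName: dapi-container
          resource: requests.memory
EOF
kubectl logs downward-api-demo   # once Succeeded; note CPU is rounded up to whole cores with the default divisor
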
• [SLOW TEST:8.179 seconds] [sig-node] Downward API /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":278,"completed":19,"skipped":301,"failed":0} SS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:06:09.910: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:06:56.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-1918" for this suite. 
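
The Job spec above ("tasks sometimes fail and are locally restarted") relies on restartPolicy: OnFailure, so failed containers are restarted in place by the kubelet, inside the same pod, until the Job reaches its completion count; that retry loop accounts for most of the ~46-second runtime. A small Job showing the same behaviour under a simpler assumption: each pod fails exactly once, using a marker file on an emptyDir that survives the in-place container restart (the conformance test uses its own failure-injection image instead):

kubectl apply -f - <<'EOF'
apiVersion: batch/v1
kind: Job
metadata:
  name: fail-once-locally
spec:
  completions: 2
  parallelism: 2
  template:
    spec:
      restartPolicy: OnFailure
      containers:
      - name: c
        image: busybox:1.36
        # First attempt leaves a marker and fails; the restarted container sees it and exits 0.
        command: ["sh", "-c", "if [ ! -f /data/ran ]; then touch /data/ran; exit 1; fi"]
        volumeMounts:
        - name: data
          mountPath: /data
      volumes:
      - name: data
        emptyDir: {}
EOF
kubectl wait --for=condition=Complete job/fail-once-locally --timeout=120s
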
• [SLOW TEST:46.204 seconds] [sig-apps] Job /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":278,"completed":20,"skipped":303,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:06:56.117: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on node default medium Aug 16 20:06:56.954: INFO: Waiting up to 5m0s for pod "pod-875cbf43-8edf-4d75-b24b-fa3190843dfe" in namespace "emptydir-349" to be "success or failure" Aug 16 20:06:57.086: INFO: Pod "pod-875cbf43-8edf-4d75-b24b-fa3190843dfe": Phase="Pending", Reason="", readiness=false. Elapsed: 131.503372ms Aug 16 20:06:59.207: INFO: Pod "pod-875cbf43-8edf-4d75-b24b-fa3190843dfe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.252292718s Aug 16 20:07:01.211: INFO: Pod "pod-875cbf43-8edf-4d75-b24b-fa3190843dfe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.2568395s STEP: Saw pod success Aug 16 20:07:01.211: INFO: Pod "pod-875cbf43-8edf-4d75-b24b-fa3190843dfe" satisfied condition "success or failure" Aug 16 20:07:01.248: INFO: Trying to get logs from node jerma-worker2 pod pod-875cbf43-8edf-4d75-b24b-fa3190843dfe container test-container: STEP: delete the pod Aug 16 20:07:01.802: INFO: Waiting for pod pod-875cbf43-8edf-4d75-b24b-fa3190843dfe to disappear Aug 16 20:07:01.816: INFO: Pod pod-875cbf43-8edf-4d75-b24b-fa3190843dfe no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:07:01.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-349" for this suite. 
• [SLOW TEST:5.903 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":21,"skipped":327,"failed":0} SS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:07:02.021: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on tmpfs Aug 16 20:07:02.314: INFO: Waiting up to 5m0s for pod "pod-a32819ed-286c-40dd-999f-319a866fdc52" in namespace "emptydir-7484" to be "success or failure" Aug 16 20:07:02.385: INFO: Pod "pod-a32819ed-286c-40dd-999f-319a866fdc52": Phase="Pending", Reason="", readiness=false. Elapsed: 70.524315ms Aug 16 20:07:04.585: INFO: Pod "pod-a32819ed-286c-40dd-999f-319a866fdc52": Phase="Pending", Reason="", readiness=false. Elapsed: 2.270413601s Aug 16 20:07:06.591: INFO: Pod "pod-a32819ed-286c-40dd-999f-319a866fdc52": Phase="Running", Reason="", readiness=true. Elapsed: 4.276384365s Aug 16 20:07:08.597: INFO: Pod "pod-a32819ed-286c-40dd-999f-319a866fdc52": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.282071361s STEP: Saw pod success Aug 16 20:07:08.597: INFO: Pod "pod-a32819ed-286c-40dd-999f-319a866fdc52" satisfied condition "success or failure" Aug 16 20:07:08.600: INFO: Trying to get logs from node jerma-worker pod pod-a32819ed-286c-40dd-999f-319a866fdc52 container test-container: STEP: delete the pod Aug 16 20:07:08.699: INFO: Waiting for pod pod-a32819ed-286c-40dd-999f-319a866fdc52 to disappear Aug 16 20:07:08.716: INFO: Pod pod-a32819ed-286c-40dd-999f-319a866fdc52 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:07:08.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7484" for this suite. 
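
The two EmptyDir specs just run differ only in the backing medium: (root,0644,default) writes to an emptyDir on the node's default storage, while (root,0644,tmpfs) uses medium: Memory; in both, a file is created as root with mode 0644 and its permissions and contents are verified from inside the container. A condensed shell-based sketch of the tmpfs case (the conformance test uses the agnhost mounttest image rather than this one-liner); dropping the medium field gives the default-medium case:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0644-tmpfs
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.36
    command: ["sh", "-c", "echo hello > /mnt/test/f && chmod 0644 /mnt/test/f && ls -l /mnt/test/f && cat /mnt/test/f"]
    volumeMounts:
    - name: vol
      mountPath: /mnt/test
  volumes:
  - name: vol
    emptyDir:
      medium: Memory
EOF
kubectl logs emptydir-0644-tmpfs   # once Succeeded: expect -rw-r--r-- on /mnt/test/f and "hello"
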
• [SLOW TEST:6.704 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":22,"skipped":329,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:07:08.726: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: set up a multi version CRD Aug 16 20:07:08.793: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not serverd STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:08:55.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8022" for this suite. 
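
The CustomResourcePublishOpenAPI spec above sets up a multi-version CRD, flips one version's served flag to false ("mark a version not serverd" is a typo in the test's own step name), and then checks that the unserved version's definitions drop out of the aggregated OpenAPI document while the served version is unchanged; most of the 107 seconds is spent waiting for the spec to be republished. A minimal two-version CRD that exercises the same served flag, with placeholder group and kind names, plus a check against the published spec (CRD definitions are published under names of the form com.example.<version>.Widget for group example.com):

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
  - name: v1beta1
    served: false        # toggling this flag is what the test observes
    storage: false
    schema:
      openAPIV3Schema:
        type: object
EOF
# Only the served version should appear once the spec has been re-aggregated:
kubectl get --raw /openapi/v2 | grep -c 'com.example.v1.Widget'
kubectl get --raw /openapi/v2 | grep -c 'com.example.v1beta1.Widget'   # expect 0 after republication
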
• [SLOW TEST:107.629 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":278,"completed":23,"skipped":350,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:08:56.357: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-939, will wait for the garbage collector to delete the pods Aug 16 20:09:03.250: INFO: Deleting Job.batch foo took: 9.740675ms Aug 16 20:09:03.651: INFO: Terminating Job.batch foo pods took: 401.553187ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:09:41.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-939" for this suite. 
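
The Job deletion spec above creates a parallel Job, confirms the active pod count matches the parallelism, deletes the Job, and then waits for the garbage collector to remove the Job's pods ("will wait for the garbage collector to delete the pods"); that GC wait is most of the 45 seconds. The same ownership-based cleanup can be watched from kubectl; on current kubectl versions, --cascade=background (the default) returns as soon as the Job object is gone and lets the garbage collector reap the dependent pods, while --cascade=foreground holds the Job until its pods have been deleted:

kubectl create job foo --image=busybox -- sleep 3600
kubectl get pods -l job-name=foo -o yaml | grep -A3 ownerReferences   # each pod points back at the Job
kubectl delete job foo --cascade=background
kubectl get pods -l job-name=foo --watch   # rows disappear once the GC deletes the dependents
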
• [SLOW TEST:45.334 seconds] [sig-apps] Job /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":278,"completed":24,"skipped":364,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:09:41.694: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Aug 16 20:09:49.342: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:09:49.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9708" for this suite. 
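
The Container Runtime spec above checks the success side of terminationMessagePolicy: FallbackToLogsOnError: when the container exits 0 and never writes /dev/termination-log, the reported termination message must be empty, because container logs are only used as a fallback when the container fails. A pod that reproduces that success case, with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: termination-message-demo
spec:
  restartPolicy: Never
  containers:
  - name: c
    image: busybox:1.36
    command: ["sh", "-c", "echo some log output; exit 0"]
    terminationMessagePolicy: FallbackToLogsOnError
EOF
# After the pod Succeeds, the terminated state should carry no message:
kubectl get pod termination-message-demo \
  -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'
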
• [SLOW TEST:7.803 seconds] [k8s.io] Container Runtime /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 blackbox test /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131 should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":25,"skipped":393,"failed":0} SSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:09:49.497: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-db246232-bfe8-4e9f-9a92-e95828847ba3 STEP: Creating a pod to test consume configMaps Aug 16 20:09:49.714: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1e8f0ae2-28a9-461d-8977-93544f51481f" in namespace "projected-736" to be "success or failure" Aug 16 20:09:49.743: INFO: Pod "pod-projected-configmaps-1e8f0ae2-28a9-461d-8977-93544f51481f": Phase="Pending", Reason="", readiness=false. Elapsed: 28.54522ms Aug 16 20:09:51.747: INFO: Pod "pod-projected-configmaps-1e8f0ae2-28a9-461d-8977-93544f51481f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033070172s Aug 16 20:09:53.753: INFO: Pod "pod-projected-configmaps-1e8f0ae2-28a9-461d-8977-93544f51481f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039068082s Aug 16 20:09:55.761: INFO: Pod "pod-projected-configmaps-1e8f0ae2-28a9-461d-8977-93544f51481f": Phase="Running", Reason="", readiness=true. Elapsed: 6.047136608s Aug 16 20:09:57.843: INFO: Pod "pod-projected-configmaps-1e8f0ae2-28a9-461d-8977-93544f51481f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.128978819s STEP: Saw pod success Aug 16 20:09:57.844: INFO: Pod "pod-projected-configmaps-1e8f0ae2-28a9-461d-8977-93544f51481f" satisfied condition "success or failure" Aug 16 20:09:57.850: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-1e8f0ae2-28a9-461d-8977-93544f51481f container projected-configmap-volume-test: STEP: delete the pod Aug 16 20:09:57.942: INFO: Waiting for pod pod-projected-configmaps-1e8f0ae2-28a9-461d-8977-93544f51481f to disappear Aug 16 20:09:57.991: INFO: Pod pod-projected-configmaps-1e8f0ae2-28a9-461d-8977-93544f51481f no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:09:57.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-736" for this suite. • [SLOW TEST:8.506 seconds] [sig-storage] Projected configMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":278,"completed":26,"skipped":398,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:09:58.004: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-4b9cda34-e11a-48da-8b18-d280ea32f907 STEP: Creating a pod to test consume secrets Aug 16 20:09:58.091: INFO: Waiting up to 5m0s for pod "pod-secrets-48cb152a-c592-4b58-8096-429035222e03" in namespace "secrets-4165" to be "success or failure" Aug 16 20:09:58.124: INFO: Pod "pod-secrets-48cb152a-c592-4b58-8096-429035222e03": Phase="Pending", Reason="", readiness=false. Elapsed: 32.975103ms Aug 16 20:10:00.153: INFO: Pod "pod-secrets-48cb152a-c592-4b58-8096-429035222e03": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062691105s Aug 16 20:10:02.184: INFO: Pod "pod-secrets-48cb152a-c592-4b58-8096-429035222e03": Phase="Pending", Reason="", readiness=false. Elapsed: 4.092962508s Aug 16 20:10:04.387: INFO: Pod "pod-secrets-48cb152a-c592-4b58-8096-429035222e03": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.296098543s Aug 16 20:10:06.597: INFO: Pod "pod-secrets-48cb152a-c592-4b58-8096-429035222e03": Phase="Pending", Reason="", readiness=false. Elapsed: 8.50612913s Aug 16 20:10:08.603: INFO: Pod "pod-secrets-48cb152a-c592-4b58-8096-429035222e03": Phase="Pending", Reason="", readiness=false. Elapsed: 10.512369173s Aug 16 20:10:10.807: INFO: Pod "pod-secrets-48cb152a-c592-4b58-8096-429035222e03": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.715969625s STEP: Saw pod success Aug 16 20:10:10.807: INFO: Pod "pod-secrets-48cb152a-c592-4b58-8096-429035222e03" satisfied condition "success or failure" Aug 16 20:10:10.811: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-48cb152a-c592-4b58-8096-429035222e03 container secret-volume-test: STEP: delete the pod Aug 16 20:10:10.843: INFO: Waiting for pod pod-secrets-48cb152a-c592-4b58-8096-429035222e03 to disappear Aug 16 20:10:11.387: INFO: Pod pod-secrets-48cb152a-c592-4b58-8096-429035222e03 no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:10:11.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4165" for this suite. • [SLOW TEST:13.396 seconds] [sig-storage] Secrets /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":27,"skipped":413,"failed":0} SSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:10:11.401: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-35a5d417-314b-40b6-b57b-ec00ef69098e STEP: Creating a pod to test consume secrets Aug 16 20:10:12.193: INFO: Waiting up to 5m0s for pod "pod-secrets-335efbdd-0d2c-4d48-aee1-eb78df1ccb06" in namespace "secrets-9318" to be "success or failure" Aug 16 20:10:12.264: INFO: Pod "pod-secrets-335efbdd-0d2c-4d48-aee1-eb78df1ccb06": Phase="Pending", Reason="", readiness=false. Elapsed: 70.097209ms Aug 16 20:10:14.268: INFO: Pod "pod-secrets-335efbdd-0d2c-4d48-aee1-eb78df1ccb06": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.074699556s Aug 16 20:10:16.401: INFO: Pod "pod-secrets-335efbdd-0d2c-4d48-aee1-eb78df1ccb06": Phase="Pending", Reason="", readiness=false. Elapsed: 4.207364211s Aug 16 20:10:18.461: INFO: Pod "pod-secrets-335efbdd-0d2c-4d48-aee1-eb78df1ccb06": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.267039677s STEP: Saw pod success Aug 16 20:10:18.461: INFO: Pod "pod-secrets-335efbdd-0d2c-4d48-aee1-eb78df1ccb06" satisfied condition "success or failure" Aug 16 20:10:18.465: INFO: Trying to get logs from node jerma-worker pod pod-secrets-335efbdd-0d2c-4d48-aee1-eb78df1ccb06 container secret-volume-test: STEP: delete the pod Aug 16 20:10:18.746: INFO: Waiting for pod pod-secrets-335efbdd-0d2c-4d48-aee1-eb78df1ccb06 to disappear Aug 16 20:10:18.760: INFO: Pod pod-secrets-335efbdd-0d2c-4d48-aee1-eb78df1ccb06 no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:10:18.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9318" for this suite. • [SLOW TEST:7.478 seconds] [sig-storage] Secrets /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":28,"skipped":417,"failed":0} SSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:10:18.880: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 16 20:10:22.812: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 16 20:10:25.058: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733205422, 
loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733205422, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733205423, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733205422, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 16 20:10:27.095: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733205422, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733205422, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733205423, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733205422, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 16 20:10:30.154: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:10:30.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-931" for this suite. STEP: Destroying namespace "webhook-931-markers" for this suite. 
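
The AdmissionWebhook discovery spec above walks the discovery documents top-down: the /apis root for the admissionregistration.k8s.io group, the group document for the v1 version, and the v1 resource list for mutatingwebhookconfigurations and validatingwebhookconfigurations; the webhook deployment and service created in BeforeEach are shared fixture for the suite and are not exercised by this particular check. The same walk can be reproduced against the raw discovery endpoints:

kubectl get --raw /apis | grep -o '"name":"admissionregistration.k8s.io"'
kubectl get --raw /apis/admissionregistration.k8s.io | grep -o '"groupVersion":"admissionregistration.k8s.io/v1"'
kubectl get --raw /apis/admissionregistration.k8s.io/v1 | grep -o '"name":"[a-z]*webhookconfigurations"'
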
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:12.213 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":278,"completed":29,"skipped":420,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:10:31.095: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Aug 16 20:10:32.493: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f7a9568f-7680-45ab-a7b8-5156e69349d1" in namespace "projected-7868" to be "success or failure" Aug 16 20:10:32.538: INFO: Pod "downwardapi-volume-f7a9568f-7680-45ab-a7b8-5156e69349d1": Phase="Pending", Reason="", readiness=false. Elapsed: 44.547825ms Aug 16 20:10:34.543: INFO: Pod "downwardapi-volume-f7a9568f-7680-45ab-a7b8-5156e69349d1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050127506s Aug 16 20:10:36.910: INFO: Pod "downwardapi-volume-f7a9568f-7680-45ab-a7b8-5156e69349d1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.416415342s Aug 16 20:10:38.951: INFO: Pod "downwardapi-volume-f7a9568f-7680-45ab-a7b8-5156e69349d1": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.45788812s STEP: Saw pod success Aug 16 20:10:38.951: INFO: Pod "downwardapi-volume-f7a9568f-7680-45ab-a7b8-5156e69349d1" satisfied condition "success or failure" Aug 16 20:10:39.020: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-f7a9568f-7680-45ab-a7b8-5156e69349d1 container client-container: STEP: delete the pod Aug 16 20:10:39.652: INFO: Waiting for pod downwardapi-volume-f7a9568f-7680-45ab-a7b8-5156e69349d1 to disappear Aug 16 20:10:39.677: INFO: Pod downwardapi-volume-f7a9568f-7680-45ab-a7b8-5156e69349d1 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:10:39.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7868" for this suite. • [SLOW TEST:8.596 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":30,"skipped":435,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:10:39.693: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-5581 STEP: creating a selector STEP: Creating the service pods in kubernetes Aug 16 20:10:40.111: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Aug 16 20:11:06.660: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.119:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-5581 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 16 20:11:06.661: INFO: >>> kubeConfig: /root/.kube/config I0816 20:11:06.727373 7 log.go:172] (0x4002efc2c0) (0x4001135680) Create stream I0816 20:11:06.728958 7 log.go:172] (0x4002efc2c0) (0x4001135680) Stream added, broadcasting: 1 I0816 20:11:06.749688 7 log.go:172] (0x4002efc2c0) Reply frame received for 1 I0816 20:11:06.750852 7 log.go:172] (0x4002efc2c0) (0x400282e000) Create 
stream I0816 20:11:06.750970 7 log.go:172] (0x4002efc2c0) (0x400282e000) Stream added, broadcasting: 3 I0816 20:11:06.753524 7 log.go:172] (0x4002efc2c0) Reply frame received for 3 I0816 20:11:06.753798 7 log.go:172] (0x4002efc2c0) (0x4001135720) Create stream I0816 20:11:06.753867 7 log.go:172] (0x4002efc2c0) (0x4001135720) Stream added, broadcasting: 5 I0816 20:11:06.755514 7 log.go:172] (0x4002efc2c0) Reply frame received for 5 I0816 20:11:06.825716 7 log.go:172] (0x4002efc2c0) Data frame received for 5 I0816 20:11:06.826079 7 log.go:172] (0x4001135720) (5) Data frame handling I0816 20:11:06.826353 7 log.go:172] (0x4002efc2c0) Data frame received for 1 I0816 20:11:06.826535 7 log.go:172] (0x4001135680) (1) Data frame handling I0816 20:11:06.826741 7 log.go:172] (0x4002efc2c0) Data frame received for 3 I0816 20:11:06.826855 7 log.go:172] (0x400282e000) (3) Data frame handling I0816 20:11:06.827918 7 log.go:172] (0x4001135680) (1) Data frame sent I0816 20:11:06.828115 7 log.go:172] (0x400282e000) (3) Data frame sent I0816 20:11:06.828215 7 log.go:172] (0x4002efc2c0) Data frame received for 3 I0816 20:11:06.828291 7 log.go:172] (0x400282e000) (3) Data frame handling I0816 20:11:06.829905 7 log.go:172] (0x4002efc2c0) (0x4001135680) Stream removed, broadcasting: 1 I0816 20:11:06.831211 7 log.go:172] (0x4002efc2c0) Go away received I0816 20:11:06.833074 7 log.go:172] (0x4002efc2c0) (0x4001135680) Stream removed, broadcasting: 1 I0816 20:11:06.833369 7 log.go:172] (0x4002efc2c0) (0x400282e000) Stream removed, broadcasting: 3 I0816 20:11:06.833604 7 log.go:172] (0x4002efc2c0) (0x4001135720) Stream removed, broadcasting: 5 Aug 16 20:11:06.834: INFO: Found all expected endpoints: [netserver-0] Aug 16 20:11:06.839: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.157:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-5581 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 16 20:11:06.839: INFO: >>> kubeConfig: /root/.kube/config I0816 20:11:06.896921 7 log.go:172] (0x4005cb04d0) (0x40020a6960) Create stream I0816 20:11:06.897079 7 log.go:172] (0x4005cb04d0) (0x40020a6960) Stream added, broadcasting: 1 I0816 20:11:06.901390 7 log.go:172] (0x4005cb04d0) Reply frame received for 1 I0816 20:11:06.901678 7 log.go:172] (0x4005cb04d0) (0x400165c320) Create stream I0816 20:11:06.901826 7 log.go:172] (0x4005cb04d0) (0x400165c320) Stream added, broadcasting: 3 I0816 20:11:06.903857 7 log.go:172] (0x4005cb04d0) Reply frame received for 3 I0816 20:11:06.904054 7 log.go:172] (0x4005cb04d0) (0x40020a6aa0) Create stream I0816 20:11:06.904166 7 log.go:172] (0x4005cb04d0) (0x40020a6aa0) Stream added, broadcasting: 5 I0816 20:11:06.906198 7 log.go:172] (0x4005cb04d0) Reply frame received for 5 I0816 20:11:06.965873 7 log.go:172] (0x4005cb04d0) Data frame received for 5 I0816 20:11:06.966053 7 log.go:172] (0x40020a6aa0) (5) Data frame handling I0816 20:11:06.966294 7 log.go:172] (0x4005cb04d0) Data frame received for 3 I0816 20:11:06.966514 7 log.go:172] (0x400165c320) (3) Data frame handling I0816 20:11:06.966682 7 log.go:172] (0x400165c320) (3) Data frame sent I0816 20:11:06.966860 7 log.go:172] (0x4005cb04d0) Data frame received for 3 I0816 20:11:06.967112 7 log.go:172] (0x400165c320) (3) Data frame handling I0816 20:11:06.985234 7 log.go:172] (0x4005cb04d0) Data frame received for 1 I0816 20:11:06.985348 7 log.go:172] (0x40020a6960) (1) Data frame handling I0816 
20:11:06.985414 7 log.go:172] (0x40020a6960) (1) Data frame sent I0816 20:11:06.985487 7 log.go:172] (0x4005cb04d0) (0x40020a6960) Stream removed, broadcasting: 1 I0816 20:11:06.985579 7 log.go:172] (0x4005cb04d0) Go away received I0816 20:11:06.986005 7 log.go:172] (0x4005cb04d0) (0x40020a6960) Stream removed, broadcasting: 1 I0816 20:11:06.986146 7 log.go:172] (0x4005cb04d0) (0x400165c320) Stream removed, broadcasting: 3 I0816 20:11:06.986223 7 log.go:172] (0x4005cb04d0) (0x40020a6aa0) Stream removed, broadcasting: 5 Aug 16 20:11:06.986: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:11:06.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-5581" for this suite. • [SLOW TEST:27.309 seconds] [sig-network] Networking /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":31,"skipped":455,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:11:07.005: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be submitted and removed [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Aug 16 20:11:07.134: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:11:21.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3439" for this suite. 
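
The Pods spec above submits a pod, observes the creation through a watch, deletes it with a grace period, and then uses the same watch to confirm that the kubelet observed the termination notice and that the deletion event arrived; most of the ~14 seconds is the graceful shutdown. A loose interactive equivalent, assuming a recent kubectl that supports --output-watch-events; the test itself opens the watch programmatically from a recorded resourceVersion, which a plain kubectl watch does not do:

# terminal 1: follow ADDED/MODIFIED/DELETED events for pods in the namespace
kubectl get pods --watch --output-watch-events

# terminal 2: submit the pod, then delete it gracefully and follow terminal 1
kubectl run pod-submit-remove --image=nginx --restart=Never
kubectl delete pod pod-submit-remove --grace-period=30
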
• [SLOW TEST:14.588 seconds] [k8s.io] Pods /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be submitted and removed [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":278,"completed":32,"skipped":480,"failed":0} SSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:11:21.594: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0816 20:11:32.643476 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Aug 16 20:11:32.644: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:11:32.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9883" for this suite. 
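
The Garbage collector spec above creates a replication controller, deletes it without orphaning, and waits until every pod the controller created has been garbage collected; the metrics-grabber warning only means scheduler and controller-manager metrics were not collected for this spec. The non-orphaning path corresponds to an ordinary cascading delete, with --cascade=orphan being the opposite case. A sketch with a throwaway controller (names are illustrative):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: gc-demo
spec:
  replicas: 2
  selector:
    app: gc-demo
  template:
    metadata:
      labels:
        app: gc-demo
    spec:
      containers:
      - name: nginx
        image: nginx
EOF
kubectl delete rc gc-demo                      # cascading delete: the GC removes the dependent pods
# kubectl delete rc gc-demo --cascade=orphan   # would leave the pods behind instead
kubectl get pods -l app=gc-demo --watch        # pods terminate once the owner is gone
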
• [SLOW TEST:11.060 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":278,"completed":33,"skipped":485,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:11:32.655: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating secret secrets-986/secret-test-dd358f50-07d0-48be-b75b-3e94c0d846ce STEP: Creating a pod to test consume secrets Aug 16 20:11:33.322: INFO: Waiting up to 5m0s for pod "pod-configmaps-40fb19c7-845a-4c46-b9ca-f093a8314bc6" in namespace "secrets-986" to be "success or failure" Aug 16 20:11:33.802: INFO: Pod "pod-configmaps-40fb19c7-845a-4c46-b9ca-f093a8314bc6": Phase="Pending", Reason="", readiness=false. Elapsed: 479.521477ms Aug 16 20:11:35.808: INFO: Pod "pod-configmaps-40fb19c7-845a-4c46-b9ca-f093a8314bc6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.485812763s Aug 16 20:11:37.813: INFO: Pod "pod-configmaps-40fb19c7-845a-4c46-b9ca-f093a8314bc6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.490937962s STEP: Saw pod success Aug 16 20:11:37.813: INFO: Pod "pod-configmaps-40fb19c7-845a-4c46-b9ca-f093a8314bc6" satisfied condition "success or failure" Aug 16 20:11:37.820: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-40fb19c7-845a-4c46-b9ca-f093a8314bc6 container env-test: STEP: delete the pod Aug 16 20:11:37.864: INFO: Waiting for pod pod-configmaps-40fb19c7-845a-4c46-b9ca-f093a8314bc6 to disappear Aug 16 20:11:37.870: INFO: Pod pod-configmaps-40fb19c7-845a-4c46-b9ca-f093a8314bc6 no longer exists [AfterEach] [sig-api-machinery] Secrets /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:11:37.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-986" for this suite. 
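The secret-to-environment path tested here comes down to an env entry with valueFrom.secretKeyRef; a minimal sketch under assumed names (secret, key, pod, and image are illustrative, not from this run):
kubectl create secret generic test-secret --from-literal=data-1=value-1
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: env-test-pod
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "env | grep SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: test-secret
          key: data-1
EOF
# once the pod has completed, its log should show SECRET_DATA=value-1
kubectl logs env-test-pod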
• [SLOW TEST:5.311 seconds] [sig-api-machinery] Secrets /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":34,"skipped":491,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:11:37.967: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-mwg29 in namespace proxy-5744 I0816 20:11:38.195315 7 runners.go:189] Created replication controller with name: proxy-service-mwg29, namespace: proxy-5744, replica count: 1 I0816 20:11:39.246749 7 runners.go:189] proxy-service-mwg29 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0816 20:11:40.247501 7 runners.go:189] proxy-service-mwg29 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0816 20:11:41.248140 7 runners.go:189] proxy-service-mwg29 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0816 20:11:42.248877 7 runners.go:189] proxy-service-mwg29 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0816 20:11:43.249636 7 runners.go:189] proxy-service-mwg29 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0816 20:11:44.250337 7 runners.go:189] proxy-service-mwg29 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0816 20:11:45.251122 7 runners.go:189] proxy-service-mwg29 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0816 20:11:46.251930 7 runners.go:189] proxy-service-mwg29 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0816 20:11:47.252535 7 runners.go:189] proxy-service-mwg29 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0816 20:11:48.253191 7 runners.go:189] proxy-service-mwg29 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 
runningButNotReady I0816 20:11:49.253707 7 runners.go:189] proxy-service-mwg29 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0816 20:11:50.254295 7 runners.go:189] proxy-service-mwg29 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Aug 16 20:11:50.266: INFO: setup took 12.150214671s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Aug 16 20:11:50.278: INFO: (0) /api/v1/namespaces/proxy-5744/services/http:proxy-service-mwg29:portname2/proxy/: bar (200; 10.450647ms) Aug 16 20:11:50.278: INFO: (0) /api/v1/namespaces/proxy-5744/pods/proxy-service-mwg29-fhkt4/proxy/: test (200; 11.038792ms) Aug 16 20:11:50.279: INFO: (0) /api/v1/namespaces/proxy-5744/pods/http:proxy-service-mwg29-fhkt4:160/proxy/: foo (200; 11.317013ms) Aug 16 20:11:50.279: INFO: (0) /api/v1/namespaces/proxy-5744/pods/proxy-service-mwg29-fhkt4:160/proxy/: foo (200; 11.15263ms) Aug 16 20:11:50.279: INFO: (0) /api/v1/namespaces/proxy-5744/services/proxy-service-mwg29:portname2/proxy/: bar (200; 11.221773ms) Aug 16 20:11:50.282: INFO: (0) /api/v1/namespaces/proxy-5744/pods/proxy-service-mwg29-fhkt4:162/proxy/: bar (200; 14.679411ms) Aug 16 20:11:50.282: INFO: (0) /api/v1/namespaces/proxy-5744/services/proxy-service-mwg29:portname1/proxy/: foo (200; 14.863754ms) Aug 16 20:11:50.282: INFO: (0) /api/v1/namespaces/proxy-5744/services/http:proxy-service-mwg29:portname1/proxy/: foo (200; 14.224598ms) Aug 16 20:11:50.282: INFO: (0) /api/v1/namespaces/proxy-5744/pods/http:proxy-service-mwg29-fhkt4:1080/proxy/: ... (200; 14.692509ms) Aug 16 20:11:50.284: INFO: (0) /api/v1/namespaces/proxy-5744/pods/http:proxy-service-mwg29-fhkt4:162/proxy/: bar (200; 15.766881ms) Aug 16 20:11:50.284: INFO: (0) /api/v1/namespaces/proxy-5744/pods/proxy-service-mwg29-fhkt4:1080/proxy/: test<... (200; 16.321043ms) Aug 16 20:11:50.288: INFO: (0) /api/v1/namespaces/proxy-5744/pods/https:proxy-service-mwg29-fhkt4:460/proxy/: tls baz (200; 21.023438ms) Aug 16 20:11:50.288: INFO: (0) /api/v1/namespaces/proxy-5744/services/https:proxy-service-mwg29:tlsportname1/proxy/: tls baz (200; 21.170583ms) Aug 16 20:11:50.288: INFO: (0) /api/v1/namespaces/proxy-5744/pods/https:proxy-service-mwg29-fhkt4:462/proxy/: tls qux (200; 21.296292ms) Aug 16 20:11:50.288: INFO: (0) /api/v1/namespaces/proxy-5744/services/https:proxy-service-mwg29:tlsportname2/proxy/: tls qux (200; 21.049686ms) Aug 16 20:11:50.289: INFO: (0) /api/v1/namespaces/proxy-5744/pods/https:proxy-service-mwg29-fhkt4:443/proxy/: ... 
(200; 5.011956ms) Aug 16 20:11:50.295: INFO: (1) /api/v1/namespaces/proxy-5744/pods/http:proxy-service-mwg29-fhkt4:160/proxy/: foo (200; 4.979166ms) Aug 16 20:11:50.295: INFO: (1) /api/v1/namespaces/proxy-5744/pods/proxy-service-mwg29-fhkt4/proxy/: test (200; 5.772229ms) Aug 16 20:11:50.296: INFO: (1) /api/v1/namespaces/proxy-5744/services/proxy-service-mwg29:portname1/proxy/: foo (200; 6.310654ms) Aug 16 20:11:50.296: INFO: (1) /api/v1/namespaces/proxy-5744/services/https:proxy-service-mwg29:tlsportname2/proxy/: tls qux (200; 6.40448ms) Aug 16 20:11:50.296: INFO: (1) /api/v1/namespaces/proxy-5744/pods/proxy-service-mwg29-fhkt4:162/proxy/: bar (200; 6.44409ms) Aug 16 20:11:50.296: INFO: (1) /api/v1/namespaces/proxy-5744/services/https:proxy-service-mwg29:tlsportname1/proxy/: tls baz (200; 6.7393ms) Aug 16 20:11:50.297: INFO: (1) /api/v1/namespaces/proxy-5744/pods/http:proxy-service-mwg29-fhkt4:162/proxy/: bar (200; 6.82498ms) Aug 16 20:11:50.297: INFO: (1) /api/v1/namespaces/proxy-5744/pods/https:proxy-service-mwg29-fhkt4:460/proxy/: tls baz (200; 6.822852ms) Aug 16 20:11:50.297: INFO: (1) /api/v1/namespaces/proxy-5744/services/http:proxy-service-mwg29:portname1/proxy/: foo (200; 7.119655ms) Aug 16 20:11:50.297: INFO: (1) /api/v1/namespaces/proxy-5744/pods/https:proxy-service-mwg29-fhkt4:462/proxy/: tls qux (200; 7.078913ms) Aug 16 20:11:50.297: INFO: (1) /api/v1/namespaces/proxy-5744/pods/proxy-service-mwg29-fhkt4:1080/proxy/: test<... (200; 6.986546ms) Aug 16 20:11:50.298: INFO: (1) /api/v1/namespaces/proxy-5744/services/proxy-service-mwg29:portname2/proxy/: bar (200; 8.32048ms) Aug 16 20:11:50.298: INFO: (1) /api/v1/namespaces/proxy-5744/pods/https:proxy-service-mwg29-fhkt4:443/proxy/: test<... (200; 3.838509ms) Aug 16 20:11:50.304: INFO: (2) /api/v1/namespaces/proxy-5744/pods/https:proxy-service-mwg29-fhkt4:462/proxy/: tls qux (200; 5.087938ms) Aug 16 20:11:50.304: INFO: (2) /api/v1/namespaces/proxy-5744/services/http:proxy-service-mwg29:portname2/proxy/: bar (200; 5.242452ms) Aug 16 20:11:50.305: INFO: (2) /api/v1/namespaces/proxy-5744/pods/https:proxy-service-mwg29-fhkt4:443/proxy/: ... (200; 6.053875ms) Aug 16 20:11:50.305: INFO: (2) /api/v1/namespaces/proxy-5744/services/https:proxy-service-mwg29:tlsportname1/proxy/: tls baz (200; 6.55274ms) Aug 16 20:11:50.306: INFO: (2) /api/v1/namespaces/proxy-5744/services/proxy-service-mwg29:portname1/proxy/: foo (200; 6.430399ms) Aug 16 20:11:50.306: INFO: (2) /api/v1/namespaces/proxy-5744/services/proxy-service-mwg29:portname2/proxy/: bar (200; 7.061588ms) Aug 16 20:11:50.306: INFO: (2) /api/v1/namespaces/proxy-5744/pods/proxy-service-mwg29-fhkt4/proxy/: test (200; 6.831099ms) Aug 16 20:11:50.306: INFO: (2) /api/v1/namespaces/proxy-5744/pods/http:proxy-service-mwg29-fhkt4:160/proxy/: foo (200; 7.017121ms) Aug 16 20:11:50.306: INFO: (2) /api/v1/namespaces/proxy-5744/services/http:proxy-service-mwg29:portname1/proxy/: foo (200; 7.259883ms) Aug 16 20:11:50.307: INFO: (2) /api/v1/namespaces/proxy-5744/pods/proxy-service-mwg29-fhkt4:160/proxy/: foo (200; 7.574464ms) Aug 16 20:11:50.307: INFO: (2) /api/v1/namespaces/proxy-5744/pods/http:proxy-service-mwg29-fhkt4:162/proxy/: bar (200; 7.374231ms) Aug 16 20:11:50.311: INFO: (3) /api/v1/namespaces/proxy-5744/pods/http:proxy-service-mwg29-fhkt4:160/proxy/: foo (200; 3.905151ms) Aug 16 20:11:50.311: INFO: (3) /api/v1/namespaces/proxy-5744/pods/https:proxy-service-mwg29-fhkt4:443/proxy/: ... 
(200; 5.645157ms) Aug 16 20:11:50.313: INFO: (3) /api/v1/namespaces/proxy-5744/services/https:proxy-service-mwg29:tlsportname2/proxy/: tls qux (200; 5.979173ms) Aug 16 20:11:50.313: INFO: (3) /api/v1/namespaces/proxy-5744/pods/proxy-service-mwg29-fhkt4:160/proxy/: foo (200; 5.863419ms) Aug 16 20:11:50.313: INFO: (3) /api/v1/namespaces/proxy-5744/pods/proxy-service-mwg29-fhkt4/proxy/: test (200; 5.912913ms) Aug 16 20:11:50.313: INFO: (3) /api/v1/namespaces/proxy-5744/services/https:proxy-service-mwg29:tlsportname1/proxy/: tls baz (200; 6.258555ms) Aug 16 20:11:50.313: INFO: (3) /api/v1/namespaces/proxy-5744/pods/http:proxy-service-mwg29-fhkt4:162/proxy/: bar (200; 6.630497ms) Aug 16 20:11:50.313: INFO: (3) /api/v1/namespaces/proxy-5744/services/http:proxy-service-mwg29:portname1/proxy/: foo (200; 6.407389ms) Aug 16 20:11:50.314: INFO: (3) /api/v1/namespaces/proxy-5744/pods/https:proxy-service-mwg29-fhkt4:460/proxy/: tls baz (200; 6.375714ms) Aug 16 20:11:50.314: INFO: (3) /api/v1/namespaces/proxy-5744/services/proxy-service-mwg29:portname1/proxy/: foo (200; 7.126688ms) Aug 16 20:11:50.314: INFO: (3) /api/v1/namespaces/proxy-5744/pods/proxy-service-mwg29-fhkt4:1080/proxy/: test<... (200; 6.735237ms) Aug 16 20:11:50.317: INFO: (4) /api/v1/namespaces/proxy-5744/pods/https:proxy-service-mwg29-fhkt4:462/proxy/: tls qux (200; 2.542638ms) Aug 16 20:11:50.319: INFO: (4) /api/v1/namespaces/proxy-5744/pods/https:proxy-service-mwg29-fhkt4:460/proxy/: tls baz (200; 3.669664ms) Aug 16 20:11:50.319: INFO: (4) /api/v1/namespaces/proxy-5744/pods/proxy-service-mwg29-fhkt4:1080/proxy/: test<... (200; 3.411579ms) Aug 16 20:11:50.319: INFO: (4) /api/v1/namespaces/proxy-5744/services/proxy-service-mwg29:portname2/proxy/: bar (200; 4.446978ms) Aug 16 20:11:50.319: INFO: (4) /api/v1/namespaces/proxy-5744/pods/http:proxy-service-mwg29-fhkt4:1080/proxy/: ... (200; 3.250816ms) Aug 16 20:11:50.321: INFO: (4) /api/v1/namespaces/proxy-5744/pods/proxy-service-mwg29-fhkt4:160/proxy/: foo (200; 4.309879ms) Aug 16 20:11:50.321: INFO: (4) /api/v1/namespaces/proxy-5744/services/https:proxy-service-mwg29:tlsportname1/proxy/: tls baz (200; 5.561946ms) Aug 16 20:11:50.321: INFO: (4) /api/v1/namespaces/proxy-5744/pods/proxy-service-mwg29-fhkt4:162/proxy/: bar (200; 3.983949ms) Aug 16 20:11:50.321: INFO: (4) /api/v1/namespaces/proxy-5744/services/https:proxy-service-mwg29:tlsportname2/proxy/: tls qux (200; 4.098585ms) Aug 16 20:11:50.321: INFO: (4) /api/v1/namespaces/proxy-5744/services/proxy-service-mwg29:portname1/proxy/: foo (200; 4.727267ms) Aug 16 20:11:50.321: INFO: (4) /api/v1/namespaces/proxy-5744/pods/proxy-service-mwg29-fhkt4/proxy/: test (200; 4.767984ms) Aug 16 20:11:50.321: INFO: (4) /api/v1/namespaces/proxy-5744/pods/https:proxy-service-mwg29-fhkt4:443/proxy/: test<... (200; 6.364726ms) Aug 16 20:11:50.330: INFO: (5) /api/v1/namespaces/proxy-5744/pods/http:proxy-service-mwg29-fhkt4:1080/proxy/: ... 
(200; 6.291859ms) Aug 16 20:11:50.331: INFO: (5) /api/v1/namespaces/proxy-5744/pods/proxy-service-mwg29-fhkt4:162/proxy/: bar (200; 6.920573ms) Aug 16 20:11:50.331: INFO: (5) /api/v1/namespaces/proxy-5744/pods/proxy-service-mwg29-fhkt4/proxy/: test (200; 7.117943ms) Aug 16 20:11:50.330: INFO: (5) /api/v1/namespaces/proxy-5744/pods/https:proxy-service-mwg29-fhkt4:443/proxy/: test (200; 4.811349ms) Aug 16 20:11:50.336: INFO: (6) /api/v1/namespaces/proxy-5744/pods/http:proxy-service-mwg29-fhkt4:162/proxy/: bar (200; 4.676321ms) Aug 16 20:11:50.336: INFO: (6) /api/v1/namespaces/proxy-5744/pods/http:proxy-service-mwg29-fhkt4:160/proxy/: foo (200; 5.215723ms) Aug 16 20:11:50.337: INFO: (6) /api/v1/namespaces/proxy-5744/pods/proxy-service-mwg29-fhkt4:1080/proxy/: test<... (200; 5.429734ms) Aug 16 20:11:50.337: INFO: (6) /api/v1/namespaces/proxy-5744/pods/proxy-service-mwg29-fhkt4:162/proxy/: bar (200; 5.913809ms) Aug 16 20:11:50.337: INFO: (6) /api/v1/namespaces/proxy-5744/services/proxy-service-mwg29:portname1/proxy/: foo (200; 6.02075ms) Aug 16 20:11:50.337: INFO: (6) /api/v1/namespaces/proxy-5744/services/proxy-service-mwg29:portname2/proxy/: bar (200; 5.949923ms) Aug 16 20:11:50.337: INFO: (6) /api/v1/namespaces/proxy-5744/pods/https:proxy-service-mwg29-fhkt4:443/proxy/: ... (200; 6.428985ms) Aug 16 20:11:50.338: INFO: (6) /api/v1/namespaces/proxy-5744/services/http:proxy-service-mwg29:portname2/proxy/: bar (200; 6.92647ms) Aug 16 20:11:50.338: INFO: (6) /api/v1/namespaces/proxy-5744/services/https:proxy-service-mwg29:tlsportname1/proxy/: tls baz (200; 7.219166ms) Aug 16 20:11:50.347: INFO: (7) /api/v1/namespaces/proxy-5744/pods/proxy-service-mwg29-fhkt4/proxy/: test (200; 7.600662ms) Aug 16 20:11:50.347: INFO: (7) /api/v1/namespaces/proxy-5744/pods/proxy-service-mwg29-fhkt4:162/proxy/: bar (200; 8.373935ms) Aug 16 20:11:50.347: INFO: (7) /api/v1/namespaces/proxy-5744/pods/http:proxy-service-mwg29-fhkt4:162/proxy/: bar (200; 8.512596ms) Aug 16 20:11:50.347: INFO: (7) /api/v1/namespaces/proxy-5744/pods/proxy-service-mwg29-fhkt4:1080/proxy/: test<... (200; 8.203741ms) Aug 16 20:11:50.347: INFO: (7) /api/v1/namespaces/proxy-5744/pods/http:proxy-service-mwg29-fhkt4:160/proxy/: foo (200; 8.437411ms) Aug 16 20:11:50.347: INFO: (7) /api/v1/namespaces/proxy-5744/pods/https:proxy-service-mwg29-fhkt4:460/proxy/: tls baz (200; 8.342783ms) Aug 16 20:11:50.347: INFO: (7) /api/v1/namespaces/proxy-5744/pods/https:proxy-service-mwg29-fhkt4:443/proxy/: ... 
(200; 8.532481ms) Aug 16 20:11:50.348: INFO: (7) /api/v1/namespaces/proxy-5744/pods/https:proxy-service-mwg29-fhkt4:462/proxy/: tls qux (200; 8.931317ms) Aug 16 20:11:50.348: INFO: (7) /api/v1/namespaces/proxy-5744/pods/proxy-service-mwg29-fhkt4:160/proxy/: foo (200; 9.165707ms) Aug 16 20:11:50.348: INFO: (7) /api/v1/namespaces/proxy-5744/services/proxy-service-mwg29:portname1/proxy/: foo (200; 9.116687ms) Aug 16 20:11:50.348: INFO: (7) /api/v1/namespaces/proxy-5744/services/http:proxy-service-mwg29:portname1/proxy/: foo (200; 9.227927ms) Aug 16 20:11:50.348: INFO: (7) /api/v1/namespaces/proxy-5744/services/http:proxy-service-mwg29:portname2/proxy/: bar (200; 8.955101ms) Aug 16 20:11:50.348: INFO: (7) /api/v1/namespaces/proxy-5744/services/proxy-service-mwg29:portname2/proxy/: bar (200; 9.16796ms) Aug 16 20:11:50.348: INFO: (7) /api/v1/namespaces/proxy-5744/services/https:proxy-service-mwg29:tlsportname2/proxy/: tls qux (200; 9.710306ms) Aug 16 20:11:50.348: INFO: (7) /api/v1/namespaces/proxy-5744/services/https:proxy-service-mwg29:tlsportname1/proxy/: tls baz (200; 9.358511ms) Aug 16 20:11:50.353: INFO: (8) /api/v1/namespaces/proxy-5744/services/https:proxy-service-mwg29:tlsportname1/proxy/: tls baz (200; 4.505257ms) Aug 16 20:11:50.356: INFO: (8) /api/v1/namespaces/proxy-5744/pods/https:proxy-service-mwg29-fhkt4:460/proxy/: tls baz (200; 7.19442ms) Aug 16 20:11:50.356: INFO: (8) /api/v1/namespaces/proxy-5744/services/http:proxy-service-mwg29:portname1/proxy/: foo (200; 7.808991ms) Aug 16 20:11:50.357: INFO: (8) /api/v1/namespaces/proxy-5744/services/proxy-service-mwg29:portname2/proxy/: bar (200; 7.941371ms) Aug 16 20:11:50.357: INFO: (8) /api/v1/namespaces/proxy-5744/pods/proxy-service-mwg29-fhkt4:1080/proxy/: test<... (200; 8.002907ms) Aug 16 20:11:50.357: INFO: (8) /api/v1/namespaces/proxy-5744/pods/proxy-service-mwg29-fhkt4:160/proxy/: foo (200; 8.11681ms) Aug 16 20:11:50.357: INFO: (8) /api/v1/namespaces/proxy-5744/pods/http:proxy-service-mwg29-fhkt4:160/proxy/: foo (200; 8.32698ms) Aug 16 20:11:50.357: INFO: (8) /api/v1/namespaces/proxy-5744/pods/proxy-service-mwg29-fhkt4:162/proxy/: bar (200; 8.340756ms) Aug 16 20:11:50.357: INFO: (8) /api/v1/namespaces/proxy-5744/services/proxy-service-mwg29:portname1/proxy/: foo (200; 8.244052ms) Aug 16 20:11:50.357: INFO: (8) /api/v1/namespaces/proxy-5744/pods/http:proxy-service-mwg29-fhkt4:1080/proxy/: ... (200; 8.729029ms) Aug 16 20:11:50.357: INFO: (8) /api/v1/namespaces/proxy-5744/pods/proxy-service-mwg29-fhkt4/proxy/: test (200; 8.715413ms) Aug 16 20:11:50.357: INFO: (8) /api/v1/namespaces/proxy-5744/pods/https:proxy-service-mwg29-fhkt4:462/proxy/: tls qux (200; 8.520104ms) Aug 16 20:11:50.357: INFO: (8) /api/v1/namespaces/proxy-5744/services/https:proxy-service-mwg29:tlsportname2/proxy/: tls qux (200; 8.742075ms) Aug 16 20:11:50.358: INFO: (8) /api/v1/namespaces/proxy-5744/pods/https:proxy-service-mwg29-fhkt4:443/proxy/: test<... 
(200; 6.284921ms) Aug 16 20:11:50.373: INFO: (9) /api/v1/namespaces/proxy-5744/pods/https:proxy-service-mwg29-fhkt4:460/proxy/: tls baz (200; 6.593102ms) Aug 16 20:11:50.373: INFO: (9) /api/v1/namespaces/proxy-5744/services/https:proxy-service-mwg29:tlsportname2/proxy/: tls qux (200; 7.341706ms) Aug 16 20:11:50.373: INFO: (9) /api/v1/namespaces/proxy-5744/services/proxy-service-mwg29:portname1/proxy/: foo (200; 7.577817ms) Aug 16 20:11:50.373: INFO: (9) /api/v1/namespaces/proxy-5744/services/https:proxy-service-mwg29:tlsportname1/proxy/: tls baz (200; 7.752083ms) Aug 16 20:11:50.374: INFO: (9) /api/v1/namespaces/proxy-5744/pods/proxy-service-mwg29-fhkt4/proxy/: test (200; 7.574381ms) Aug 16 20:11:50.374: INFO: (9) /api/v1/namespaces/proxy-5744/pods/http:proxy-service-mwg29-fhkt4:1080/proxy/: ... (200; 7.818084ms) Aug 16 20:11:50.374: INFO: (9) /api/v1/namespaces/proxy-5744/pods/http:proxy-service-mwg29-fhkt4:162/proxy/: bar (200; 7.954036ms) Aug 16 20:11:50.374: INFO: (9) /api/v1/namespaces/proxy-5744/services/http:proxy-service-mwg29:portname2/proxy/: bar (200; 8.094849ms) Aug 16 20:11:50.374: INFO: (9) /api/v1/namespaces/proxy-5744/services/http:proxy-service-mwg29:portname1/proxy/: foo (200; 8.227161ms) Aug 16 20:11:50.374: INFO: (9) /api/v1/namespaces/proxy-5744/services/proxy-service-mwg29:portname2/proxy/: bar (200; 8.345173ms) Aug 16 20:11:50.379: INFO: (10) /api/v1/namespaces/proxy-5744/pods/https:proxy-service-mwg29-fhkt4:460/proxy/: tls baz (200; 4.367505ms) Aug 16 20:11:50.379: INFO: (10) /api/v1/namespaces/proxy-5744/services/https:proxy-service-mwg29:tlsportname2/proxy/: tls qux (200; 4.734804ms) Aug 16 20:11:50.379: INFO: (10) /api/v1/namespaces/proxy-5744/pods/proxy-service-mwg29-fhkt4:160/proxy/: foo (200; 4.125479ms) Aug 16 20:11:50.380: INFO: (10) /api/v1/namespaces/proxy-5744/pods/proxy-service-mwg29-fhkt4/proxy/: test (200; 5.514278ms) Aug 16 20:11:50.380: INFO: (10) /api/v1/namespaces/proxy-5744/pods/https:proxy-service-mwg29-fhkt4:462/proxy/: tls qux (200; 5.612287ms) Aug 16 20:11:50.380: INFO: (10) /api/v1/namespaces/proxy-5744/pods/http:proxy-service-mwg29-fhkt4:162/proxy/: bar (200; 5.054985ms) Aug 16 20:11:50.380: INFO: (10) /api/v1/namespaces/proxy-5744/services/http:proxy-service-mwg29:portname2/proxy/: bar (200; 6.098227ms) Aug 16 20:11:50.380: INFO: (10) /api/v1/namespaces/proxy-5744/services/http:proxy-service-mwg29:portname1/proxy/: foo (200; 6.13296ms) Aug 16 20:11:50.381: INFO: (10) /api/v1/namespaces/proxy-5744/pods/https:proxy-service-mwg29-fhkt4:443/proxy/: ... (200; 6.312351ms) Aug 16 20:11:50.381: INFO: (10) /api/v1/namespaces/proxy-5744/pods/proxy-service-mwg29-fhkt4:162/proxy/: bar (200; 6.072102ms) Aug 16 20:11:50.381: INFO: (10) /api/v1/namespaces/proxy-5744/pods/proxy-service-mwg29-fhkt4:1080/proxy/: test<... 
(200; 6.496056ms) Aug 16 20:11:50.381: INFO: (10) /api/v1/namespaces/proxy-5744/services/https:proxy-service-mwg29:tlsportname1/proxy/: tls baz (200; 6.450025ms) Aug 16 20:11:50.381: INFO: (10) /api/v1/namespaces/proxy-5744/services/proxy-service-mwg29:portname2/proxy/: bar (200; 6.755204ms) Aug 16 20:11:50.383: INFO: (10) /api/v1/namespaces/proxy-5744/services/proxy-service-mwg29:portname1/proxy/: foo (200; 8.617188ms) Aug 16 20:11:50.388: INFO: (11) /api/v1/namespaces/proxy-5744/pods/proxy-service-mwg29-fhkt4:160/proxy/: foo (200; 4.272199ms) Aug 16 20:11:50.388: INFO: (11) /api/v1/namespaces/proxy-5744/pods/http:proxy-service-mwg29-fhkt4:160/proxy/: foo (200; 4.277801ms) Aug 16 20:11:50.389: INFO: (11) /api/v1/namespaces/proxy-5744/pods/proxy-service-mwg29-fhkt4/proxy/: test (200; 5.489271ms) Aug 16 20:11:50.389: INFO: (11) /api/v1/namespaces/proxy-5744/pods/proxy-service-mwg29-fhkt4:1080/proxy/: test<... (200; 5.587786ms) Aug 16 20:11:50.391: INFO: (11) /api/v1/namespaces/proxy-5744/services/proxy-service-mwg29:portname1/proxy/: foo (200; 7.22151ms) Aug 16 20:11:50.391: INFO: (11) /api/v1/namespaces/proxy-5744/services/http:proxy-service-mwg29:portname2/proxy/: bar (200; 7.290742ms) Aug 16 20:11:50.391: INFO: (11) /api/v1/namespaces/proxy-5744/pods/https:proxy-service-mwg29-fhkt4:443/proxy/: ... (200; 9.84946ms) Aug 16 20:11:50.397: INFO: (12) /api/v1/namespaces/proxy-5744/pods/https:proxy-service-mwg29-fhkt4:443/proxy/: test (200; 4.829921ms) Aug 16 20:11:50.399: INFO: (12) /api/v1/namespaces/proxy-5744/pods/proxy-service-mwg29-fhkt4:162/proxy/: bar (200; 4.881758ms) Aug 16 20:11:50.399: INFO: (12) /api/v1/namespaces/proxy-5744/services/proxy-service-mwg29:portname1/proxy/: foo (200; 5.174877ms) Aug 16 20:11:50.399: INFO: (12) /api/v1/namespaces/proxy-5744/pods/http:proxy-service-mwg29-fhkt4:1080/proxy/: ... (200; 5.237801ms) Aug 16 20:11:50.399: INFO: (12) /api/v1/namespaces/proxy-5744/pods/https:proxy-service-mwg29-fhkt4:460/proxy/: tls baz (200; 5.566591ms) Aug 16 20:11:50.400: INFO: (12) /api/v1/namespaces/proxy-5744/pods/proxy-service-mwg29-fhkt4:160/proxy/: foo (200; 5.622789ms) Aug 16 20:11:50.400: INFO: (12) /api/v1/namespaces/proxy-5744/services/https:proxy-service-mwg29:tlsportname1/proxy/: tls baz (200; 5.888968ms) Aug 16 20:11:50.400: INFO: (12) /api/v1/namespaces/proxy-5744/pods/https:proxy-service-mwg29-fhkt4:462/proxy/: tls qux (200; 5.958145ms) Aug 16 20:11:50.400: INFO: (12) /api/v1/namespaces/proxy-5744/services/http:proxy-service-mwg29:portname1/proxy/: foo (200; 6.210286ms) Aug 16 20:11:50.401: INFO: (12) /api/v1/namespaces/proxy-5744/pods/proxy-service-mwg29-fhkt4:1080/proxy/: test<... (200; 6.511663ms) Aug 16 20:11:50.401: INFO: (12) /api/v1/namespaces/proxy-5744/pods/http:proxy-service-mwg29-fhkt4:162/proxy/: bar (200; 6.784408ms) Aug 16 20:11:50.401: INFO: (12) /api/v1/namespaces/proxy-5744/services/https:proxy-service-mwg29:tlsportname2/proxy/: tls qux (200; 7.292481ms) Aug 16 20:11:50.401: INFO: (12) /api/v1/namespaces/proxy-5744/services/proxy-service-mwg29:portname2/proxy/: bar (200; 7.252281ms) Aug 16 20:11:50.401: INFO: (12) /api/v1/namespaces/proxy-5744/pods/http:proxy-service-mwg29-fhkt4:160/proxy/: foo (200; 7.307447ms) Aug 16 20:11:50.401: INFO: (12) /api/v1/namespaces/proxy-5744/services/http:proxy-service-mwg29:portname2/proxy/: bar (200; 7.689059ms) Aug 16 20:11:50.406: INFO: (13) /api/v1/namespaces/proxy-5744/pods/https:proxy-service-mwg29-fhkt4:443/proxy/: ... 
(200; 4.92532ms) Aug 16 20:11:50.407: INFO: (13) /api/v1/namespaces/proxy-5744/services/https:proxy-service-mwg29:tlsportname1/proxy/: tls baz (200; 5.161736ms) Aug 16 20:11:50.407: INFO: (13) /api/v1/namespaces/proxy-5744/pods/proxy-service-mwg29-fhkt4:162/proxy/: bar (200; 5.596048ms) Aug 16 20:11:50.408: INFO: (13) /api/v1/namespaces/proxy-5744/pods/proxy-service-mwg29-fhkt4/proxy/: test (200; 5.821628ms) Aug 16 20:11:50.408: INFO: (13) /api/v1/namespaces/proxy-5744/services/http:proxy-service-mwg29:portname2/proxy/: bar (200; 5.94583ms) Aug 16 20:11:50.408: INFO: (13) /api/v1/namespaces/proxy-5744/pods/proxy-service-mwg29-fhkt4:1080/proxy/: test<... (200; 6.230676ms) Aug 16 20:11:50.408: INFO: (13) /api/v1/namespaces/proxy-5744/services/proxy-service-mwg29:portname2/proxy/: bar (200; 6.389006ms) Aug 16 20:11:50.408: INFO: (13) /api/v1/namespaces/proxy-5744/services/http:proxy-service-mwg29:portname1/proxy/: foo (200; 6.537614ms) Aug 16 20:11:50.409: INFO: (13) /api/v1/namespaces/proxy-5744/pods/proxy-service-mwg29-fhkt4:160/proxy/: foo (200; 6.791093ms) Aug 16 20:11:50.409: INFO: (13) /api/v1/namespaces/proxy-5744/services/https:proxy-service-mwg29:tlsportname2/proxy/: tls qux (200; 6.769122ms) Aug 16 20:11:50.412: INFO: (14) /api/v1/namespaces/proxy-5744/pods/proxy-service-mwg29-fhkt4:1080/proxy/: test<... (200; 2.971298ms) Aug 16 20:11:50.413: INFO: (14) /api/v1/namespaces/proxy-5744/pods/http:proxy-service-mwg29-fhkt4:162/proxy/: bar (200; 4.290533ms) Aug 16 20:11:50.414: INFO: (14) /api/v1/namespaces/proxy-5744/services/https:proxy-service-mwg29:tlsportname1/proxy/: tls baz (200; 4.887883ms) Aug 16 20:11:50.414: INFO: (14) /api/v1/namespaces/proxy-5744/pods/proxy-service-mwg29-fhkt4:162/proxy/: bar (200; 5.391206ms) Aug 16 20:11:50.414: INFO: (14) /api/v1/namespaces/proxy-5744/pods/http:proxy-service-mwg29-fhkt4:1080/proxy/: ... 
(200; 5.302162ms) Aug 16 20:11:50.414: INFO: (14) /api/v1/namespaces/proxy-5744/pods/https:proxy-service-mwg29-fhkt4:460/proxy/: tls baz (200; 5.528769ms) Aug 16 20:11:50.414: INFO: (14) /api/v1/namespaces/proxy-5744/services/proxy-service-mwg29:portname1/proxy/: foo (200; 5.643936ms) Aug 16 20:11:50.415: INFO: (14) /api/v1/namespaces/proxy-5744/services/proxy-service-mwg29:portname2/proxy/: bar (200; 5.776677ms) Aug 16 20:11:50.415: INFO: (14) /api/v1/namespaces/proxy-5744/pods/https:proxy-service-mwg29-fhkt4:462/proxy/: tls qux (200; 5.704619ms) Aug 16 20:11:50.415: INFO: (14) /api/v1/namespaces/proxy-5744/services/https:proxy-service-mwg29:tlsportname2/proxy/: tls qux (200; 5.545716ms) Aug 16 20:11:50.415: INFO: (14) /api/v1/namespaces/proxy-5744/services/http:proxy-service-mwg29:portname1/proxy/: foo (200; 5.891688ms) Aug 16 20:11:50.415: INFO: (14) /api/v1/namespaces/proxy-5744/services/http:proxy-service-mwg29:portname2/proxy/: bar (200; 6.242304ms) Aug 16 20:11:50.415: INFO: (14) /api/v1/namespaces/proxy-5744/pods/http:proxy-service-mwg29-fhkt4:160/proxy/: foo (200; 6.131533ms) Aug 16 20:11:50.415: INFO: (14) /api/v1/namespaces/proxy-5744/pods/proxy-service-mwg29-fhkt4:160/proxy/: foo (200; 6.205433ms) Aug 16 20:11:50.415: INFO: (14) /api/v1/namespaces/proxy-5744/pods/proxy-service-mwg29-fhkt4/proxy/: test (200; 6.264487ms) Aug 16 20:11:50.416: INFO: (14) /api/v1/namespaces/proxy-5744/pods/https:proxy-service-mwg29-fhkt4:443/proxy/: test (200; 5.594082ms) Aug 16 20:11:50.422: INFO: (15) /api/v1/namespaces/proxy-5744/pods/https:proxy-service-mwg29-fhkt4:462/proxy/: tls qux (200; 5.963414ms) Aug 16 20:11:50.422: INFO: (15) /api/v1/namespaces/proxy-5744/pods/http:proxy-service-mwg29-fhkt4:160/proxy/: foo (200; 6.073163ms) Aug 16 20:11:50.422: INFO: (15) /api/v1/namespaces/proxy-5744/pods/https:proxy-service-mwg29-fhkt4:460/proxy/: tls baz (200; 6.026237ms) Aug 16 20:11:50.422: INFO: (15) /api/v1/namespaces/proxy-5744/services/http:proxy-service-mwg29:portname2/proxy/: bar (200; 6.474147ms) Aug 16 20:11:50.422: INFO: (15) /api/v1/namespaces/proxy-5744/services/https:proxy-service-mwg29:tlsportname2/proxy/: tls qux (200; 6.595521ms) Aug 16 20:11:50.423: INFO: (15) /api/v1/namespaces/proxy-5744/pods/proxy-service-mwg29-fhkt4:162/proxy/: bar (200; 6.718961ms) Aug 16 20:11:50.427: INFO: (15) /api/v1/namespaces/proxy-5744/services/proxy-service-mwg29:portname1/proxy/: foo (200; 11.445556ms) Aug 16 20:11:50.428: INFO: (15) /api/v1/namespaces/proxy-5744/services/https:proxy-service-mwg29:tlsportname1/proxy/: tls baz (200; 11.64017ms) Aug 16 20:11:50.428: INFO: (15) /api/v1/namespaces/proxy-5744/pods/http:proxy-service-mwg29-fhkt4:1080/proxy/: ... (200; 11.919558ms) Aug 16 20:11:50.428: INFO: (15) /api/v1/namespaces/proxy-5744/services/proxy-service-mwg29:portname2/proxy/: bar (200; 12.095222ms) Aug 16 20:11:50.428: INFO: (15) /api/v1/namespaces/proxy-5744/pods/proxy-service-mwg29-fhkt4:1080/proxy/: test<... 
(200; 12.251845ms) Aug 16 20:11:50.428: INFO: (15) /api/v1/namespaces/proxy-5744/pods/http:proxy-service-mwg29-fhkt4:162/proxy/: bar (200; 12.295864ms) Aug 16 20:11:50.429: INFO: (15) /api/v1/namespaces/proxy-5744/services/http:proxy-service-mwg29:portname1/proxy/: foo (200; 12.804842ms) Aug 16 20:11:50.434: INFO: (16) /api/v1/namespaces/proxy-5744/pods/https:proxy-service-mwg29-fhkt4:460/proxy/: tls baz (200; 5.049898ms) Aug 16 20:11:50.434: INFO: (16) /api/v1/namespaces/proxy-5744/pods/proxy-service-mwg29-fhkt4:160/proxy/: foo (200; 5.140322ms) Aug 16 20:11:50.435: INFO: (16) /api/v1/namespaces/proxy-5744/pods/https:proxy-service-mwg29-fhkt4:462/proxy/: tls qux (200; 5.382184ms) Aug 16 20:11:50.436: INFO: (16) /api/v1/namespaces/proxy-5744/pods/proxy-service-mwg29-fhkt4:162/proxy/: bar (200; 6.311733ms) Aug 16 20:11:50.436: INFO: (16) /api/v1/namespaces/proxy-5744/pods/proxy-service-mwg29-fhkt4/proxy/: test (200; 6.472693ms) Aug 16 20:11:50.436: INFO: (16) /api/v1/namespaces/proxy-5744/services/http:proxy-service-mwg29:portname1/proxy/: foo (200; 6.887761ms) Aug 16 20:11:50.437: INFO: (16) /api/v1/namespaces/proxy-5744/pods/proxy-service-mwg29-fhkt4:1080/proxy/: test<... (200; 7.719485ms) Aug 16 20:11:50.437: INFO: (16) /api/v1/namespaces/proxy-5744/pods/https:proxy-service-mwg29-fhkt4:443/proxy/: ... (200; 8.100793ms) Aug 16 20:11:50.437: INFO: (16) /api/v1/namespaces/proxy-5744/pods/http:proxy-service-mwg29-fhkt4:162/proxy/: bar (200; 8.183431ms) Aug 16 20:11:50.437: INFO: (16) /api/v1/namespaces/proxy-5744/services/proxy-service-mwg29:portname1/proxy/: foo (200; 7.383072ms) Aug 16 20:11:50.438: INFO: (16) /api/v1/namespaces/proxy-5744/pods/http:proxy-service-mwg29-fhkt4:160/proxy/: foo (200; 8.096766ms) Aug 16 20:11:50.438: INFO: (16) /api/v1/namespaces/proxy-5744/services/proxy-service-mwg29:portname2/proxy/: bar (200; 8.758191ms) Aug 16 20:11:50.444: INFO: (17) /api/v1/namespaces/proxy-5744/pods/http:proxy-service-mwg29-fhkt4:160/proxy/: foo (200; 6.312794ms) Aug 16 20:11:50.445: INFO: (17) /api/v1/namespaces/proxy-5744/pods/https:proxy-service-mwg29-fhkt4:460/proxy/: tls baz (200; 6.475534ms) Aug 16 20:11:50.445: INFO: (17) /api/v1/namespaces/proxy-5744/services/http:proxy-service-mwg29:portname1/proxy/: foo (200; 6.998497ms) Aug 16 20:11:50.445: INFO: (17) /api/v1/namespaces/proxy-5744/pods/proxy-service-mwg29-fhkt4:1080/proxy/: test<... (200; 6.500945ms) Aug 16 20:11:50.445: INFO: (17) /api/v1/namespaces/proxy-5744/services/http:proxy-service-mwg29:portname2/proxy/: bar (200; 7.568797ms) Aug 16 20:11:50.445: INFO: (17) /api/v1/namespaces/proxy-5744/pods/https:proxy-service-mwg29-fhkt4:462/proxy/: tls qux (200; 7.132908ms) Aug 16 20:11:50.446: INFO: (17) /api/v1/namespaces/proxy-5744/services/proxy-service-mwg29:portname1/proxy/: foo (200; 7.25493ms) Aug 16 20:11:50.446: INFO: (17) /api/v1/namespaces/proxy-5744/pods/http:proxy-service-mwg29-fhkt4:1080/proxy/: ... 
(200; 7.232929ms) Aug 16 20:11:50.446: INFO: (17) /api/v1/namespaces/proxy-5744/pods/proxy-service-mwg29-fhkt4:162/proxy/: bar (200; 7.605089ms) Aug 16 20:11:50.446: INFO: (17) /api/v1/namespaces/proxy-5744/pods/proxy-service-mwg29-fhkt4/proxy/: test (200; 7.630275ms) Aug 16 20:11:50.446: INFO: (17) /api/v1/namespaces/proxy-5744/services/https:proxy-service-mwg29:tlsportname2/proxy/: tls qux (200; 7.99773ms) Aug 16 20:11:50.446: INFO: (17) /api/v1/namespaces/proxy-5744/pods/proxy-service-mwg29-fhkt4:160/proxy/: foo (200; 7.306761ms) Aug 16 20:11:50.446: INFO: (17) /api/v1/namespaces/proxy-5744/pods/https:proxy-service-mwg29-fhkt4:443/proxy/: ... (200; 8.234843ms) Aug 16 20:11:50.456: INFO: (18) /api/v1/namespaces/proxy-5744/pods/proxy-service-mwg29-fhkt4/proxy/: test (200; 8.394841ms) Aug 16 20:11:50.456: INFO: (18) /api/v1/namespaces/proxy-5744/pods/https:proxy-service-mwg29-fhkt4:462/proxy/: tls qux (200; 9.476593ms) Aug 16 20:11:50.456: INFO: (18) /api/v1/namespaces/proxy-5744/pods/http:proxy-service-mwg29-fhkt4:162/proxy/: bar (200; 9.294357ms) Aug 16 20:11:50.456: INFO: (18) /api/v1/namespaces/proxy-5744/pods/http:proxy-service-mwg29-fhkt4:160/proxy/: foo (200; 9.783664ms) Aug 16 20:11:50.457: INFO: (18) /api/v1/namespaces/proxy-5744/pods/https:proxy-service-mwg29-fhkt4:443/proxy/: test<... (200; 10.10955ms) Aug 16 20:11:50.461: INFO: (19) /api/v1/namespaces/proxy-5744/pods/proxy-service-mwg29-fhkt4:162/proxy/: bar (200; 3.524192ms) Aug 16 20:11:50.462: INFO: (19) /api/v1/namespaces/proxy-5744/services/proxy-service-mwg29:portname2/proxy/: bar (200; 3.8777ms) Aug 16 20:11:50.462: INFO: (19) /api/v1/namespaces/proxy-5744/pods/https:proxy-service-mwg29-fhkt4:460/proxy/: tls baz (200; 4.167067ms) Aug 16 20:11:50.462: INFO: (19) /api/v1/namespaces/proxy-5744/pods/https:proxy-service-mwg29-fhkt4:443/proxy/: test<... (200; 5.13562ms) Aug 16 20:11:50.463: INFO: (19) /api/v1/namespaces/proxy-5744/pods/http:proxy-service-mwg29-fhkt4:1080/proxy/: ... (200; 5.141089ms) Aug 16 20:11:50.463: INFO: (19) /api/v1/namespaces/proxy-5744/pods/proxy-service-mwg29-fhkt4/proxy/: test (200; 5.138286ms) Aug 16 20:11:50.464: INFO: (19) /api/v1/namespaces/proxy-5744/services/https:proxy-service-mwg29:tlsportname1/proxy/: tls baz (200; 6.002989ms) STEP: deleting ReplicationController proxy-service-mwg29 in namespace proxy-5744, will wait for the garbage collector to delete the pods Aug 16 20:11:50.526: INFO: Deleting ReplicationController proxy-service-mwg29 took: 8.222306ms Aug 16 20:11:50.827: INFO: Terminating ReplicationController proxy-service-mwg29 pods took: 300.756464ms [AfterEach] version v1 /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:11:53.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-5744" for this suite. 
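Every request in the 320 attempts above goes through the apiserver's proxy subresource; the same endpoints can be queried directly with kubectl get --raw. The paths below are copied from the log, so the namespace and pod name are specific to this run:
# proxy to a named service port
kubectl get --raw /api/v1/namespaces/proxy-5744/services/proxy-service-mwg29:portname1/proxy/
# proxy to a specific pod port
kubectl get --raw /api/v1/namespaces/proxy-5744/pods/proxy-service-mwg29-fhkt4:160/proxy/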
• [SLOW TEST:15.872 seconds] [sig-network] Proxy /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:57 should proxy through a service and a pod [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":278,"completed":35,"skipped":508,"failed":0} S ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:11:53.840: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [BeforeEach] Update Demo /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:325 [It] should create and stop a replication controller [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a replication controller Aug 16 20:11:53.907: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4207' Aug 16 20:11:55.519: INFO: stderr: "" Aug 16 20:11:55.520: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Aug 16 20:11:55.521: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4207' Aug 16 20:11:56.736: INFO: stderr: "" Aug 16 20:11:56.736: INFO: stdout: "update-demo-nautilus-8sllh update-demo-nautilus-hskmx " Aug 16 20:11:56.737: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8sllh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4207' Aug 16 20:11:57.977: INFO: stderr: "" Aug 16 20:11:57.977: INFO: stdout: "" Aug 16 20:11:57.977: INFO: update-demo-nautilus-8sllh is created but not running Aug 16 20:12:02.978: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4207' Aug 16 20:12:04.374: INFO: stderr: "" Aug 16 20:12:04.374: INFO: stdout: "update-demo-nautilus-8sllh update-demo-nautilus-hskmx " Aug 16 20:12:04.375: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8sllh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4207' Aug 16 20:12:05.607: INFO: stderr: "" Aug 16 20:12:05.607: INFO: stdout: "true" Aug 16 20:12:05.608: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8sllh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4207' Aug 16 20:12:06.896: INFO: stderr: "" Aug 16 20:12:06.897: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 16 20:12:06.897: INFO: validating pod update-demo-nautilus-8sllh Aug 16 20:12:06.904: INFO: got data: { "image": "nautilus.jpg" } Aug 16 20:12:06.904: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Aug 16 20:12:06.905: INFO: update-demo-nautilus-8sllh is verified up and running Aug 16 20:12:06.905: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hskmx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4207' Aug 16 20:12:08.191: INFO: stderr: "" Aug 16 20:12:08.191: INFO: stdout: "true" Aug 16 20:12:08.191: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hskmx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4207' Aug 16 20:12:09.453: INFO: stderr: "" Aug 16 20:12:09.453: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 16 20:12:09.453: INFO: validating pod update-demo-nautilus-hskmx Aug 16 20:12:09.459: INFO: got data: { "image": "nautilus.jpg" } Aug 16 20:12:09.460: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Aug 16 20:12:09.460: INFO: update-demo-nautilus-hskmx is verified up and running STEP: using delete to clean up resources Aug 16 20:12:09.461: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4207' Aug 16 20:12:10.721: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Aug 16 20:12:10.721: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Aug 16 20:12:10.722: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-4207' Aug 16 20:12:12.038: INFO: stderr: "No resources found in kubectl-4207 namespace.\n" Aug 16 20:12:12.038: INFO: stdout: "" Aug 16 20:12:12.039: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-4207 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Aug 16 20:12:13.348: INFO: stderr: "" Aug 16 20:12:13.349: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:12:13.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4207" for this suite. • [SLOW TEST:19.539 seconds] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:323 should create and stop a replication controller [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":278,"completed":36,"skipped":509,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:12:13.381: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs Aug 16 20:12:13.445: INFO: Waiting up to 5m0s for pod "pod-8924d44c-fbdd-43fc-b1a4-931855328928" in namespace "emptydir-735" to be "success or failure" Aug 16 20:12:13.456: INFO: Pod "pod-8924d44c-fbdd-43fc-b1a4-931855328928": Phase="Pending", Reason="", readiness=false. Elapsed: 11.196473ms Aug 16 20:12:15.462: INFO: Pod "pod-8924d44c-fbdd-43fc-b1a4-931855328928": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017226623s Aug 16 20:12:17.470: INFO: Pod "pod-8924d44c-fbdd-43fc-b1a4-931855328928": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.025364329s STEP: Saw pod success Aug 16 20:12:17.470: INFO: Pod "pod-8924d44c-fbdd-43fc-b1a4-931855328928" satisfied condition "success or failure" Aug 16 20:12:17.475: INFO: Trying to get logs from node jerma-worker pod pod-8924d44c-fbdd-43fc-b1a4-931855328928 container test-container: STEP: delete the pod Aug 16 20:12:17.665: INFO: Waiting for pod pod-8924d44c-fbdd-43fc-b1a4-931855328928 to disappear Aug 16 20:12:17.677: INFO: Pod pod-8924d44c-fbdd-43fc-b1a4-931855328928 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:12:17.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-735" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":37,"skipped":518,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Deprecated] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:12:17.691: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [BeforeEach] Kubectl run rc /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1526 [It] should create an rc from an image [Deprecated] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Aug 16 20:12:17.810: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-7892' Aug 16 20:12:19.250: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Aug 16 20:12:19.251: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created STEP: verifying the pod controlled by rc e2e-test-httpd-rc was created STEP: confirm that you can get logs from an rc Aug 16 20:12:19.356: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-httpd-rc-zhx5r] Aug 16 20:12:19.357: INFO: Waiting up to 5m0s for pod "e2e-test-httpd-rc-zhx5r" in namespace "kubectl-7892" to be "running and ready" Aug 16 20:12:19.378: INFO: Pod "e2e-test-httpd-rc-zhx5r": Phase="Pending", Reason="", readiness=false. Elapsed: 21.455138ms Aug 16 20:12:21.466: INFO: Pod "e2e-test-httpd-rc-zhx5r": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.109396464s Aug 16 20:12:23.478: INFO: Pod "e2e-test-httpd-rc-zhx5r": Phase="Running", Reason="", readiness=true. Elapsed: 4.121163218s Aug 16 20:12:23.478: INFO: Pod "e2e-test-httpd-rc-zhx5r" satisfied condition "running and ready" Aug 16 20:12:23.479: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-httpd-rc-zhx5r] Aug 16 20:12:23.479: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-httpd-rc --namespace=kubectl-7892' Aug 16 20:12:24.837: INFO: stderr: "" Aug 16 20:12:24.837: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.2.125. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.2.125. Set the 'ServerName' directive globally to suppress this message\n[Sun Aug 16 20:12:22.270921 2020] [mpm_event:notice] [pid 1:tid 140261701520232] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Sun Aug 16 20:12:22.270974 2020] [core:notice] [pid 1:tid 140261701520232] AH00094: Command line: 'httpd -D FOREGROUND'\n" [AfterEach] Kubectl run rc /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1531 Aug 16 20:12:24.838: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-7892' Aug 16 20:12:26.209: INFO: stderr: "" Aug 16 20:12:26.210: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:12:26.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7892" for this suite. 
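The deprecated --generator=run/v1 form exercised by this test, and the rc-level log access it checks, look like this when run by hand (commands taken from the log, minus the --kubeconfig and --namespace flags; the deprecation warning suggests kubectl run --generator=run-pod/v1 or kubectl create as replacements):
# create a ReplicationController from an image (deprecated generator)
kubectl run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1
# read logs through the controller rather than an individual pod, then clean up
kubectl logs rc/e2e-test-httpd-rc
kubectl delete rc e2e-test-httpd-rc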
• [SLOW TEST:8.534 seconds] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run rc /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522 should create an rc from an image [Deprecated] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Deprecated] [Conformance]","total":278,"completed":38,"skipped":549,"failed":0} SSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:12:26.226: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-2747 STEP: creating a selector STEP: Creating the service pods in kubernetes Aug 16 20:12:26.843: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Aug 16 20:12:53.444: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.126 8081 | grep -v '^\s*$'] Namespace:pod-network-test-2747 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 16 20:12:53.444: INFO: >>> kubeConfig: /root/.kube/config I0816 20:12:53.503986 7 log.go:172] (0x40022de370) (0x40010565a0) Create stream I0816 20:12:53.504157 7 log.go:172] (0x40022de370) (0x40010565a0) Stream added, broadcasting: 1 I0816 20:12:53.507621 7 log.go:172] (0x40022de370) Reply frame received for 1 I0816 20:12:53.507834 7 log.go:172] (0x40022de370) (0x40010090e0) Create stream I0816 20:12:53.507984 7 log.go:172] (0x40022de370) (0x40010090e0) Stream added, broadcasting: 3 I0816 20:12:53.510064 7 log.go:172] (0x40022de370) Reply frame received for 3 I0816 20:12:53.510242 7 log.go:172] (0x40022de370) (0x400121e140) Create stream I0816 20:12:53.510332 7 log.go:172] (0x40022de370) (0x400121e140) Stream added, broadcasting: 5 I0816 20:12:53.512114 7 log.go:172] (0x40022de370) Reply frame received for 5 I0816 20:12:54.718984 7 log.go:172] (0x40022de370) Data frame received for 5 I0816 20:12:54.719162 7 log.go:172] (0x400121e140) (5) Data frame handling I0816 20:12:54.719357 7 log.go:172] (0x40022de370) Data frame received for 3 I0816 20:12:54.719570 7 log.go:172] (0x40010090e0) (3) Data frame handling I0816 20:12:54.719792 7 log.go:172] (0x40010090e0) (3) Data frame sent I0816 20:12:54.719964 7 
log.go:172] (0x40022de370) Data frame received for 3 I0816 20:12:54.720112 7 log.go:172] (0x40010090e0) (3) Data frame handling I0816 20:12:54.721276 7 log.go:172] (0x40022de370) Data frame received for 1 I0816 20:12:54.721418 7 log.go:172] (0x40010565a0) (1) Data frame handling I0816 20:12:54.721565 7 log.go:172] (0x40010565a0) (1) Data frame sent I0816 20:12:54.721767 7 log.go:172] (0x40022de370) (0x40010565a0) Stream removed, broadcasting: 1 I0816 20:12:54.721929 7 log.go:172] (0x40022de370) Go away received I0816 20:12:54.722360 7 log.go:172] (0x40022de370) (0x40010565a0) Stream removed, broadcasting: 1 I0816 20:12:54.722499 7 log.go:172] (0x40022de370) (0x40010090e0) Stream removed, broadcasting: 3 I0816 20:12:54.722595 7 log.go:172] (0x40022de370) (0x400121e140) Stream removed, broadcasting: 5 Aug 16 20:12:54.722: INFO: Found all expected endpoints: [netserver-0] Aug 16 20:12:54.733: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.164 8081 | grep -v '^\s*$'] Namespace:pod-network-test-2747 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 16 20:12:54.733: INFO: >>> kubeConfig: /root/.kube/config I0816 20:12:54.794494 7 log.go:172] (0x4003286630) (0x4001009680) Create stream I0816 20:12:54.794617 7 log.go:172] (0x4003286630) (0x4001009680) Stream added, broadcasting: 1 I0816 20:12:54.797741 7 log.go:172] (0x4003286630) Reply frame received for 1 I0816 20:12:54.797976 7 log.go:172] (0x4003286630) (0x40010099a0) Create stream I0816 20:12:54.798062 7 log.go:172] (0x4003286630) (0x40010099a0) Stream added, broadcasting: 3 I0816 20:12:54.799593 7 log.go:172] (0x4003286630) Reply frame received for 3 I0816 20:12:54.799736 7 log.go:172] (0x4003286630) (0x4001056780) Create stream I0816 20:12:54.799819 7 log.go:172] (0x4003286630) (0x4001056780) Stream added, broadcasting: 5 I0816 20:12:54.801370 7 log.go:172] (0x4003286630) Reply frame received for 5 I0816 20:12:55.854149 7 log.go:172] (0x4003286630) Data frame received for 3 I0816 20:12:55.854306 7 log.go:172] (0x40010099a0) (3) Data frame handling I0816 20:12:55.854393 7 log.go:172] (0x40010099a0) (3) Data frame sent I0816 20:12:55.854469 7 log.go:172] (0x4003286630) Data frame received for 3 I0816 20:12:55.854531 7 log.go:172] (0x40010099a0) (3) Data frame handling I0816 20:12:55.854617 7 log.go:172] (0x4003286630) Data frame received for 5 I0816 20:12:55.854749 7 log.go:172] (0x4001056780) (5) Data frame handling I0816 20:12:55.855484 7 log.go:172] (0x4003286630) Data frame received for 1 I0816 20:12:55.855598 7 log.go:172] (0x4001009680) (1) Data frame handling I0816 20:12:55.855706 7 log.go:172] (0x4001009680) (1) Data frame sent I0816 20:12:55.855826 7 log.go:172] (0x4003286630) (0x4001009680) Stream removed, broadcasting: 1 I0816 20:12:55.855956 7 log.go:172] (0x4003286630) Go away received I0816 20:12:55.856315 7 log.go:172] (0x4003286630) (0x4001009680) Stream removed, broadcasting: 1 I0816 20:12:55.856482 7 log.go:172] (0x4003286630) (0x40010099a0) Stream removed, broadcasting: 3 I0816 20:12:55.856622 7 log.go:172] (0x4003286630) (0x4001056780) Stream removed, broadcasting: 5 Aug 16 20:12:55.856: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:12:55.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "pod-network-test-2747" for this suite. • [SLOW TEST:29.643 seconds] [sig-network] Networking /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":39,"skipped":560,"failed":0} SSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:12:55.870: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Aug 16 20:13:07.151: INFO: Successfully updated pod "pod-update-activedeadlineseconds-d79823ba-7730-4d28-9371-8a3868d35c50" Aug 16 20:13:07.151: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-d79823ba-7730-4d28-9371-8a3868d35c50" in namespace "pods-6733" to be "terminated due to deadline exceeded" Aug 16 20:13:07.622: INFO: Pod "pod-update-activedeadlineseconds-d79823ba-7730-4d28-9371-8a3868d35c50": Phase="Running", Reason="", readiness=true. Elapsed: 470.153622ms Aug 16 20:13:09.646: INFO: Pod "pod-update-activedeadlineseconds-d79823ba-7730-4d28-9371-8a3868d35c50": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.494837694s Aug 16 20:13:09.647: INFO: Pod "pod-update-activedeadlineseconds-d79823ba-7730-4d28-9371-8a3868d35c50" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:13:09.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6733" for this suite. 
• [SLOW TEST:13.791 seconds] [k8s.io] Pods /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":278,"completed":40,"skipped":565,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:13:09.662: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:13:22.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1286" for this suite. • [SLOW TEST:12.414 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance]","total":278,"completed":41,"skipped":571,"failed":0} SSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:13:22.077: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Aug 16 20:13:22.984: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Aug 16 20:13:33.365: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6104 create -f -' Aug 16 20:13:39.743: INFO: stderr: "" Aug 16 20:13:39.743: INFO: stdout: "e2e-test-crd-publish-openapi-6693-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Aug 16 20:13:39.744: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6104 delete e2e-test-crd-publish-openapi-6693-crds test-cr' Aug 16 20:13:40.985: INFO: stderr: "" Aug 16 20:13:40.985: INFO: stdout: "e2e-test-crd-publish-openapi-6693-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Aug 16 20:13:40.985: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6104 apply -f -' Aug 16 20:13:42.590: INFO: stderr: "" Aug 16 20:13:42.590: INFO: stdout: "e2e-test-crd-publish-openapi-6693-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Aug 16 20:13:42.591: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6104 delete e2e-test-crd-publish-openapi-6693-crds test-cr' Aug 16 20:13:43.851: INFO: stderr: "" Aug 16 20:13:43.851: INFO: stdout: "e2e-test-crd-publish-openapi-6693-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Aug 16 20:13:43.852: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6693-crds' Aug 16 20:13:45.392: INFO: stderr: "" Aug 16 20:13:45.392: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6693-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:14:04.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6104" for this suite. 
• [SLOW TEST:42.877 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":278,"completed":42,"skipped":574,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Pods Extended /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:14:04.956: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:182 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:14:05.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2971" for this suite. 
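The QOS Class check above only needs a pod whose resource requests equal its limits for every container; Kubernetes then reports status.qosClass as Guaranteed. A small sketch with illustrative names:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: qos-demo                 # hypothetical name
spec:
  containers:
  - name: app
    image: k8s.gcr.io/pause:3.1
    resources:
      requests: {cpu: 100m, memory: 100Mi}
      limits:   {cpu: 100m, memory: 100Mi}   # requests == limits for cpu and memory => Guaranteed
EOF
kubectl get pod qos-demo -o jsonpath='{.status.qosClass}{"\n"}'    # Guaranteed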
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":278,"completed":43,"skipped":589,"failed":0} SS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:14:05.130: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation Aug 16 20:14:05.264: INFO: >>> kubeConfig: /root/.kube/config Aug 16 20:14:15.441: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:15:24.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-767" for this suite. 
• [SLOW TEST:79.119 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":278,"completed":44,"skipped":591,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:15:24.249: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-3dff133f-8270-4033-8fdf-8efcc779a2ad STEP: Creating a pod to test consume configMaps Aug 16 20:15:25.434: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2fe3b7fa-e5db-4015-9955-9816785aac5e" in namespace "projected-2722" to be "success or failure" Aug 16 20:15:25.491: INFO: Pod "pod-projected-configmaps-2fe3b7fa-e5db-4015-9955-9816785aac5e": Phase="Pending", Reason="", readiness=false. Elapsed: 56.729092ms Aug 16 20:15:27.498: INFO: Pod "pod-projected-configmaps-2fe3b7fa-e5db-4015-9955-9816785aac5e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063682157s Aug 16 20:15:30.080: INFO: Pod "pod-projected-configmaps-2fe3b7fa-e5db-4015-9955-9816785aac5e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.645623579s Aug 16 20:15:32.084: INFO: Pod "pod-projected-configmaps-2fe3b7fa-e5db-4015-9955-9816785aac5e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.650388928s STEP: Saw pod success Aug 16 20:15:32.085: INFO: Pod "pod-projected-configmaps-2fe3b7fa-e5db-4015-9955-9816785aac5e" satisfied condition "success or failure" Aug 16 20:15:32.088: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-2fe3b7fa-e5db-4015-9955-9816785aac5e container projected-configmap-volume-test: STEP: delete the pod Aug 16 20:15:32.115: INFO: Waiting for pod pod-projected-configmaps-2fe3b7fa-e5db-4015-9955-9816785aac5e to disappear Aug 16 20:15:32.161: INFO: Pod pod-projected-configmaps-2fe3b7fa-e5db-4015-9955-9816785aac5e no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:15:32.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2722" for this suite. • [SLOW TEST:7.924 seconds] [sig-storage] Projected configMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":45,"skipped":597,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:15:32.175: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Aug 16 20:15:46.224: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 16 20:15:46.361: INFO: Pod pod-with-poststart-exec-hook still exists Aug 16 20:15:48.361: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 16 20:15:48.686: INFO: Pod pod-with-poststart-exec-hook still exists Aug 16 20:15:50.361: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 16 20:15:50.404: INFO: Pod pod-with-poststart-exec-hook still exists Aug 16 20:15:52.361: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 16 20:15:52.368: INFO: Pod pod-with-poststart-exec-hook still exists Aug 16 20:15:54.361: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 16 20:15:54.366: INFO: Pod pod-with-poststart-exec-hook still exists Aug 16 20:15:56.361: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 16 20:15:56.368: INFO: Pod pod-with-poststart-exec-hook still exists Aug 16 20:15:58.361: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 16 20:15:58.367: INFO: Pod pod-with-poststart-exec-hook still exists Aug 16 20:16:00.361: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 16 20:16:00.661: INFO: Pod pod-with-poststart-exec-hook still exists Aug 16 20:16:02.361: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 16 20:16:02.612: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:16:02.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-6678" for this suite. 
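The lifecycle-hook test above drives a postStart exec hook and then deletes the pod. A minimal, self-contained sketch of the same hook mechanism (names and the hook command are illustrative; the e2e version instead reports back to the separate handler pod created in BeforeEach):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: poststart-demo           # hypothetical name
spec:
  containers:
  - name: main
    image: busybox:1.31
    command: ["sh", "-c", "sleep 600"]
    lifecycle:
      postStart:
        exec:
          # Runs inside the container right after it starts; the container is not
          # reported as Running until this command has finished.
          command: ["sh", "-c", "echo poststart ran > /tmp/hook"]
EOF
kubectl exec poststart-demo -- cat /tmp/hook    # poststart ran
kubectl delete pod poststart-demo               # the disappearance polling above follows this step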
• [SLOW TEST:30.522 seconds] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":278,"completed":46,"skipped":619,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:16:02.701: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override command Aug 16 20:16:03.611: INFO: Waiting up to 5m0s for pod "client-containers-c62dfadf-a390-4a13-8702-629afe54e462" in namespace "containers-6867" to be "success or failure" Aug 16 20:16:03.960: INFO: Pod "client-containers-c62dfadf-a390-4a13-8702-629afe54e462": Phase="Pending", Reason="", readiness=false. Elapsed: 349.231867ms Aug 16 20:16:06.277: INFO: Pod "client-containers-c62dfadf-a390-4a13-8702-629afe54e462": Phase="Pending", Reason="", readiness=false. Elapsed: 2.666186943s Aug 16 20:16:08.601: INFO: Pod "client-containers-c62dfadf-a390-4a13-8702-629afe54e462": Phase="Pending", Reason="", readiness=false. Elapsed: 4.990164072s Aug 16 20:16:11.032: INFO: Pod "client-containers-c62dfadf-a390-4a13-8702-629afe54e462": Phase="Running", Reason="", readiness=true. Elapsed: 7.420816063s Aug 16 20:16:13.038: INFO: Pod "client-containers-c62dfadf-a390-4a13-8702-629afe54e462": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 9.427674511s STEP: Saw pod success Aug 16 20:16:13.039: INFO: Pod "client-containers-c62dfadf-a390-4a13-8702-629afe54e462" satisfied condition "success or failure" Aug 16 20:16:13.043: INFO: Trying to get logs from node jerma-worker pod client-containers-c62dfadf-a390-4a13-8702-629afe54e462 container test-container: STEP: delete the pod Aug 16 20:16:13.293: INFO: Waiting for pod client-containers-c62dfadf-a390-4a13-8702-629afe54e462 to disappear Aug 16 20:16:13.484: INFO: Pod client-containers-c62dfadf-a390-4a13-8702-629afe54e462 no longer exists [AfterEach] [k8s.io] Docker Containers /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:16:13.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-6867" for this suite. • [SLOW TEST:10.848 seconds] [k8s.io] Docker Containers /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":278,"completed":47,"skipped":682,"failed":0} S ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:16:13.550: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-xhw2 STEP: Creating a pod to test atomic-volume-subpath Aug 16 20:16:14.339: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-xhw2" in namespace "subpath-1085" to be "success or failure" Aug 16 20:16:15.262: INFO: Pod "pod-subpath-test-configmap-xhw2": Phase="Pending", Reason="", readiness=false. Elapsed: 923.00778ms Aug 16 20:16:17.511: INFO: Pod "pod-subpath-test-configmap-xhw2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.171605773s Aug 16 20:16:20.104: INFO: Pod "pod-subpath-test-configmap-xhw2": Phase="Pending", Reason="", readiness=false. Elapsed: 5.764173196s Aug 16 20:16:22.528: INFO: Pod "pod-subpath-test-configmap-xhw2": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.18862717s Aug 16 20:16:24.533: INFO: Pod "pod-subpath-test-configmap-xhw2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.193794157s Aug 16 20:16:26.547: INFO: Pod "pod-subpath-test-configmap-xhw2": Phase="Running", Reason="", readiness=true. Elapsed: 12.207223477s Aug 16 20:16:28.773: INFO: Pod "pod-subpath-test-configmap-xhw2": Phase="Running", Reason="", readiness=true. Elapsed: 14.433827765s Aug 16 20:16:31.024: INFO: Pod "pod-subpath-test-configmap-xhw2": Phase="Running", Reason="", readiness=true. Elapsed: 16.684933102s Aug 16 20:16:33.029: INFO: Pod "pod-subpath-test-configmap-xhw2": Phase="Running", Reason="", readiness=true. Elapsed: 18.690091939s Aug 16 20:16:35.034: INFO: Pod "pod-subpath-test-configmap-xhw2": Phase="Running", Reason="", readiness=true. Elapsed: 20.694355432s Aug 16 20:16:37.038: INFO: Pod "pod-subpath-test-configmap-xhw2": Phase="Running", Reason="", readiness=true. Elapsed: 22.698346567s Aug 16 20:16:39.043: INFO: Pod "pod-subpath-test-configmap-xhw2": Phase="Running", Reason="", readiness=true. Elapsed: 24.703236075s Aug 16 20:16:41.049: INFO: Pod "pod-subpath-test-configmap-xhw2": Phase="Running", Reason="", readiness=true. Elapsed: 26.709451671s Aug 16 20:16:43.054: INFO: Pod "pod-subpath-test-configmap-xhw2": Phase="Running", Reason="", readiness=true. Elapsed: 28.714847848s Aug 16 20:16:45.059: INFO: Pod "pod-subpath-test-configmap-xhw2": Phase="Running", Reason="", readiness=true. Elapsed: 30.719304485s Aug 16 20:16:47.066: INFO: Pod "pod-subpath-test-configmap-xhw2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 32.726130424s STEP: Saw pod success Aug 16 20:16:47.066: INFO: Pod "pod-subpath-test-configmap-xhw2" satisfied condition "success or failure" Aug 16 20:16:47.355: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-configmap-xhw2 container test-container-subpath-configmap-xhw2: STEP: delete the pod Aug 16 20:16:47.411: INFO: Waiting for pod pod-subpath-test-configmap-xhw2 to disappear Aug 16 20:16:47.713: INFO: Pod pod-subpath-test-configmap-xhw2 no longer exists STEP: Deleting pod pod-subpath-test-configmap-xhw2 Aug 16 20:16:47.714: INFO: Deleting pod "pod-subpath-test-configmap-xhw2" in namespace "subpath-1085" [AfterEach] [sig-storage] Subpath /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:16:47.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-1085" for this suite. • [SLOW TEST:34.179 seconds] [sig-storage] Subpath /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":278,"completed":48,"skipped":683,"failed":0} SSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:16:47.730: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:17:02.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7816" for this suite. • [SLOW TEST:15.724 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance]","total":278,"completed":49,"skipped":688,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:17:03.455: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 16 20:17:06.023: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 16 20:17:08.038: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733205826, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733205826, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733205826, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733205825, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 16 20:17:10.289: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733205826, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733205826, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733205826, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733205825, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 16 20:17:13.432: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] 
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:17:23.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8144" for this suite. STEP: Destroying namespace "webhook-8144-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:20.615 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":278,"completed":50,"skipped":694,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:17:24.072: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-3378.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-3378.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-3378.svc.cluster.local A)" && test -n "$$check" && echo OK > 
/results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3378.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-3378.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-3378.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-3378.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-3378.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3378.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-3378.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-3378.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-3378.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-3378.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-3378.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-3378.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-3378.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-3378.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3378.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 16 20:17:44.961: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3378.svc.cluster.local from pod dns-3378/dns-test-dfcd93fe-d4b3-4131-8f6b-157f1c35b4f5: the server could not find the requested resource (get pods dns-test-dfcd93fe-d4b3-4131-8f6b-157f1c35b4f5) Aug 16 20:17:44.968: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3378.svc.cluster.local from pod dns-3378/dns-test-dfcd93fe-d4b3-4131-8f6b-157f1c35b4f5: the server could not find the requested resource (get pods dns-test-dfcd93fe-d4b3-4131-8f6b-157f1c35b4f5) Aug 16 20:17:44.973: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3378.svc.cluster.local from pod dns-3378/dns-test-dfcd93fe-d4b3-4131-8f6b-157f1c35b4f5: the server could not find the requested resource (get pods dns-test-dfcd93fe-d4b3-4131-8f6b-157f1c35b4f5) Aug 16 20:17:44.976: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3378.svc.cluster.local from pod dns-3378/dns-test-dfcd93fe-d4b3-4131-8f6b-157f1c35b4f5: the server could not find the requested resource (get pods dns-test-dfcd93fe-d4b3-4131-8f6b-157f1c35b4f5) Aug 16 20:17:44.984: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3378.svc.cluster.local from pod dns-3378/dns-test-dfcd93fe-d4b3-4131-8f6b-157f1c35b4f5: the server could not find the 
requested resource (get pods dns-test-dfcd93fe-d4b3-4131-8f6b-157f1c35b4f5) Aug 16 20:17:44.987: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3378.svc.cluster.local from pod dns-3378/dns-test-dfcd93fe-d4b3-4131-8f6b-157f1c35b4f5: the server could not find the requested resource (get pods dns-test-dfcd93fe-d4b3-4131-8f6b-157f1c35b4f5) Aug 16 20:17:44.990: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3378.svc.cluster.local from pod dns-3378/dns-test-dfcd93fe-d4b3-4131-8f6b-157f1c35b4f5: the server could not find the requested resource (get pods dns-test-dfcd93fe-d4b3-4131-8f6b-157f1c35b4f5) Aug 16 20:17:44.993: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3378.svc.cluster.local from pod dns-3378/dns-test-dfcd93fe-d4b3-4131-8f6b-157f1c35b4f5: the server could not find the requested resource (get pods dns-test-dfcd93fe-d4b3-4131-8f6b-157f1c35b4f5) Aug 16 20:17:44.999: INFO: Lookups using dns-3378/dns-test-dfcd93fe-d4b3-4131-8f6b-157f1c35b4f5 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3378.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3378.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3378.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3378.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3378.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3378.svc.cluster.local jessie_udp@dns-test-service-2.dns-3378.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3378.svc.cluster.local] Aug 16 20:17:50.010: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3378.svc.cluster.local from pod dns-3378/dns-test-dfcd93fe-d4b3-4131-8f6b-157f1c35b4f5: the server could not find the requested resource (get pods dns-test-dfcd93fe-d4b3-4131-8f6b-157f1c35b4f5) Aug 16 20:17:50.055: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3378.svc.cluster.local from pod dns-3378/dns-test-dfcd93fe-d4b3-4131-8f6b-157f1c35b4f5: the server could not find the requested resource (get pods dns-test-dfcd93fe-d4b3-4131-8f6b-157f1c35b4f5) Aug 16 20:17:50.058: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3378.svc.cluster.local from pod dns-3378/dns-test-dfcd93fe-d4b3-4131-8f6b-157f1c35b4f5: the server could not find the requested resource (get pods dns-test-dfcd93fe-d4b3-4131-8f6b-157f1c35b4f5) Aug 16 20:17:50.061: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3378.svc.cluster.local from pod dns-3378/dns-test-dfcd93fe-d4b3-4131-8f6b-157f1c35b4f5: the server could not find the requested resource (get pods dns-test-dfcd93fe-d4b3-4131-8f6b-157f1c35b4f5) Aug 16 20:17:50.067: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3378.svc.cluster.local from pod dns-3378/dns-test-dfcd93fe-d4b3-4131-8f6b-157f1c35b4f5: the server could not find the requested resource (get pods dns-test-dfcd93fe-d4b3-4131-8f6b-157f1c35b4f5) Aug 16 20:17:50.070: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3378.svc.cluster.local from pod dns-3378/dns-test-dfcd93fe-d4b3-4131-8f6b-157f1c35b4f5: the server could not find the requested resource (get pods dns-test-dfcd93fe-d4b3-4131-8f6b-157f1c35b4f5) Aug 16 20:17:50.072: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3378.svc.cluster.local from pod dns-3378/dns-test-dfcd93fe-d4b3-4131-8f6b-157f1c35b4f5: the server could not find the requested resource (get pods dns-test-dfcd93fe-d4b3-4131-8f6b-157f1c35b4f5) Aug 16 20:17:50.075: INFO: Unable to read 
jessie_tcp@dns-test-service-2.dns-3378.svc.cluster.local from pod dns-3378/dns-test-dfcd93fe-d4b3-4131-8f6b-157f1c35b4f5: the server could not find the requested resource (get pods dns-test-dfcd93fe-d4b3-4131-8f6b-157f1c35b4f5) Aug 16 20:17:50.080: INFO: Lookups using dns-3378/dns-test-dfcd93fe-d4b3-4131-8f6b-157f1c35b4f5 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3378.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3378.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3378.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3378.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3378.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3378.svc.cluster.local jessie_udp@dns-test-service-2.dns-3378.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3378.svc.cluster.local] Aug 16 20:17:55.005: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3378.svc.cluster.local from pod dns-3378/dns-test-dfcd93fe-d4b3-4131-8f6b-157f1c35b4f5: the server could not find the requested resource (get pods dns-test-dfcd93fe-d4b3-4131-8f6b-157f1c35b4f5) Aug 16 20:17:55.009: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3378.svc.cluster.local from pod dns-3378/dns-test-dfcd93fe-d4b3-4131-8f6b-157f1c35b4f5: the server could not find the requested resource (get pods dns-test-dfcd93fe-d4b3-4131-8f6b-157f1c35b4f5) Aug 16 20:17:55.013: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3378.svc.cluster.local from pod dns-3378/dns-test-dfcd93fe-d4b3-4131-8f6b-157f1c35b4f5: the server could not find the requested resource (get pods dns-test-dfcd93fe-d4b3-4131-8f6b-157f1c35b4f5) Aug 16 20:17:55.016: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3378.svc.cluster.local from pod dns-3378/dns-test-dfcd93fe-d4b3-4131-8f6b-157f1c35b4f5: the server could not find the requested resource (get pods dns-test-dfcd93fe-d4b3-4131-8f6b-157f1c35b4f5) Aug 16 20:17:55.026: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3378.svc.cluster.local from pod dns-3378/dns-test-dfcd93fe-d4b3-4131-8f6b-157f1c35b4f5: the server could not find the requested resource (get pods dns-test-dfcd93fe-d4b3-4131-8f6b-157f1c35b4f5) Aug 16 20:17:55.028: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3378.svc.cluster.local from pod dns-3378/dns-test-dfcd93fe-d4b3-4131-8f6b-157f1c35b4f5: the server could not find the requested resource (get pods dns-test-dfcd93fe-d4b3-4131-8f6b-157f1c35b4f5) Aug 16 20:17:55.031: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3378.svc.cluster.local from pod dns-3378/dns-test-dfcd93fe-d4b3-4131-8f6b-157f1c35b4f5: the server could not find the requested resource (get pods dns-test-dfcd93fe-d4b3-4131-8f6b-157f1c35b4f5) Aug 16 20:17:55.034: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3378.svc.cluster.local from pod dns-3378/dns-test-dfcd93fe-d4b3-4131-8f6b-157f1c35b4f5: the server could not find the requested resource (get pods dns-test-dfcd93fe-d4b3-4131-8f6b-157f1c35b4f5) Aug 16 20:17:55.041: INFO: Lookups using dns-3378/dns-test-dfcd93fe-d4b3-4131-8f6b-157f1c35b4f5 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3378.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3378.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3378.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3378.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3378.svc.cluster.local 
jessie_tcp@dns-querier-2.dns-test-service-2.dns-3378.svc.cluster.local jessie_udp@dns-test-service-2.dns-3378.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3378.svc.cluster.local] Aug 16 20:18:00.010: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3378.svc.cluster.local from pod dns-3378/dns-test-dfcd93fe-d4b3-4131-8f6b-157f1c35b4f5: the server could not find the requested resource (get pods dns-test-dfcd93fe-d4b3-4131-8f6b-157f1c35b4f5) Aug 16 20:18:00.015: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3378.svc.cluster.local from pod dns-3378/dns-test-dfcd93fe-d4b3-4131-8f6b-157f1c35b4f5: the server could not find the requested resource (get pods dns-test-dfcd93fe-d4b3-4131-8f6b-157f1c35b4f5) Aug 16 20:18:00.017: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3378.svc.cluster.local from pod dns-3378/dns-test-dfcd93fe-d4b3-4131-8f6b-157f1c35b4f5: the server could not find the requested resource (get pods dns-test-dfcd93fe-d4b3-4131-8f6b-157f1c35b4f5) Aug 16 20:18:00.020: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3378.svc.cluster.local from pod dns-3378/dns-test-dfcd93fe-d4b3-4131-8f6b-157f1c35b4f5: the server could not find the requested resource (get pods dns-test-dfcd93fe-d4b3-4131-8f6b-157f1c35b4f5) Aug 16 20:18:00.033: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3378.svc.cluster.local from pod dns-3378/dns-test-dfcd93fe-d4b3-4131-8f6b-157f1c35b4f5: the server could not find the requested resource (get pods dns-test-dfcd93fe-d4b3-4131-8f6b-157f1c35b4f5) Aug 16 20:18:00.035: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3378.svc.cluster.local from pod dns-3378/dns-test-dfcd93fe-d4b3-4131-8f6b-157f1c35b4f5: the server could not find the requested resource (get pods dns-test-dfcd93fe-d4b3-4131-8f6b-157f1c35b4f5) Aug 16 20:18:00.038: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3378.svc.cluster.local from pod dns-3378/dns-test-dfcd93fe-d4b3-4131-8f6b-157f1c35b4f5: the server could not find the requested resource (get pods dns-test-dfcd93fe-d4b3-4131-8f6b-157f1c35b4f5) Aug 16 20:18:00.040: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3378.svc.cluster.local from pod dns-3378/dns-test-dfcd93fe-d4b3-4131-8f6b-157f1c35b4f5: the server could not find the requested resource (get pods dns-test-dfcd93fe-d4b3-4131-8f6b-157f1c35b4f5) Aug 16 20:18:00.046: INFO: Lookups using dns-3378/dns-test-dfcd93fe-d4b3-4131-8f6b-157f1c35b4f5 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3378.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3378.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3378.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3378.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3378.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3378.svc.cluster.local jessie_udp@dns-test-service-2.dns-3378.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3378.svc.cluster.local] Aug 16 20:18:05.005: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3378.svc.cluster.local from pod dns-3378/dns-test-dfcd93fe-d4b3-4131-8f6b-157f1c35b4f5: the server could not find the requested resource (get pods dns-test-dfcd93fe-d4b3-4131-8f6b-157f1c35b4f5) Aug 16 20:18:05.009: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3378.svc.cluster.local from pod dns-3378/dns-test-dfcd93fe-d4b3-4131-8f6b-157f1c35b4f5: the server could not find the requested resource (get pods 
dns-test-dfcd93fe-d4b3-4131-8f6b-157f1c35b4f5) Aug 16 20:18:05.012: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3378.svc.cluster.local from pod dns-3378/dns-test-dfcd93fe-d4b3-4131-8f6b-157f1c35b4f5: the server could not find the requested resource (get pods dns-test-dfcd93fe-d4b3-4131-8f6b-157f1c35b4f5) Aug 16 20:18:05.016: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3378.svc.cluster.local from pod dns-3378/dns-test-dfcd93fe-d4b3-4131-8f6b-157f1c35b4f5: the server could not find the requested resource (get pods dns-test-dfcd93fe-d4b3-4131-8f6b-157f1c35b4f5) Aug 16 20:18:05.025: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3378.svc.cluster.local from pod dns-3378/dns-test-dfcd93fe-d4b3-4131-8f6b-157f1c35b4f5: the server could not find the requested resource (get pods dns-test-dfcd93fe-d4b3-4131-8f6b-157f1c35b4f5) Aug 16 20:18:05.028: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3378.svc.cluster.local from pod dns-3378/dns-test-dfcd93fe-d4b3-4131-8f6b-157f1c35b4f5: the server could not find the requested resource (get pods dns-test-dfcd93fe-d4b3-4131-8f6b-157f1c35b4f5) Aug 16 20:18:05.030: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3378.svc.cluster.local from pod dns-3378/dns-test-dfcd93fe-d4b3-4131-8f6b-157f1c35b4f5: the server could not find the requested resource (get pods dns-test-dfcd93fe-d4b3-4131-8f6b-157f1c35b4f5) Aug 16 20:18:05.033: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3378.svc.cluster.local from pod dns-3378/dns-test-dfcd93fe-d4b3-4131-8f6b-157f1c35b4f5: the server could not find the requested resource (get pods dns-test-dfcd93fe-d4b3-4131-8f6b-157f1c35b4f5) Aug 16 20:18:05.037: INFO: Lookups using dns-3378/dns-test-dfcd93fe-d4b3-4131-8f6b-157f1c35b4f5 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3378.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3378.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3378.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3378.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3378.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3378.svc.cluster.local jessie_udp@dns-test-service-2.dns-3378.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3378.svc.cluster.local] Aug 16 20:18:10.125: INFO: DNS probes using dns-3378/dns-test-dfcd93fe-d4b3-4131-8f6b-157f1c35b4f5 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:18:10.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3378" for this suite. 
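The dig probes above resolve pod records of the form <hostname>.<subdomain>.<namespace>.svc.cluster.local, which only exist when a pod sets hostname/subdomain and a headless Service named after that subdomain selects it. A compact sketch with hypothetical names:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata: {name: sub-demo}
spec:
  clusterIP: None                        # headless, so per-pod records are published
  selector: {app: sub-demo}
  ports: [{name: http, port: 80}]
---
apiVersion: v1
kind: Pod
metadata:
  name: sub-demo-0
  labels: {app: sub-demo}
spec:
  hostname: querier-1
  subdomain: sub-demo                    # must match the headless Service name
  containers:
  - {name: main, image: busybox:1.31, command: ["sleep", "3600"]}
EOF
# Once the pod is ready, its subdomain record resolves from inside the cluster:
kubectl exec sub-demo-0 -- nslookup querier-1.sub-demo.default.svc.cluster.local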
• [SLOW TEST:46.340 seconds] [sig-network] DNS /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":278,"completed":51,"skipped":706,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:18:10.414: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-map-711bcfed-7e26-4ec2-ab84-a026badd51fc STEP: Creating a pod to test consume secrets Aug 16 20:18:11.102: INFO: Waiting up to 5m0s for pod "pod-secrets-f22f77f8-24cd-45e1-bc86-2c48cabe92f5" in namespace "secrets-8763" to be "success or failure" Aug 16 20:18:11.118: INFO: Pod "pod-secrets-f22f77f8-24cd-45e1-bc86-2c48cabe92f5": Phase="Pending", Reason="", readiness=false. Elapsed: 15.649619ms Aug 16 20:18:13.203: INFO: Pod "pod-secrets-f22f77f8-24cd-45e1-bc86-2c48cabe92f5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.100199458s Aug 16 20:18:16.221: INFO: Pod "pod-secrets-f22f77f8-24cd-45e1-bc86-2c48cabe92f5": Phase="Pending", Reason="", readiness=false. Elapsed: 5.118478339s Aug 16 20:18:19.068: INFO: Pod "pod-secrets-f22f77f8-24cd-45e1-bc86-2c48cabe92f5": Phase="Pending", Reason="", readiness=false. Elapsed: 7.966099094s Aug 16 20:18:21.281: INFO: Pod "pod-secrets-f22f77f8-24cd-45e1-bc86-2c48cabe92f5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.178567401s Aug 16 20:18:23.377: INFO: Pod "pod-secrets-f22f77f8-24cd-45e1-bc86-2c48cabe92f5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.274355809s STEP: Saw pod success Aug 16 20:18:23.377: INFO: Pod "pod-secrets-f22f77f8-24cd-45e1-bc86-2c48cabe92f5" satisfied condition "success or failure" Aug 16 20:18:23.844: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-f22f77f8-24cd-45e1-bc86-2c48cabe92f5 container secret-volume-test: STEP: delete the pod Aug 16 20:18:24.199: INFO: Waiting for pod pod-secrets-f22f77f8-24cd-45e1-bc86-2c48cabe92f5 to disappear Aug 16 20:18:24.486: INFO: Pod pod-secrets-f22f77f8-24cd-45e1-bc86-2c48cabe92f5 no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:18:24.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8763" for this suite. • [SLOW TEST:14.094 seconds] [sig-storage] Secrets /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":52,"skipped":728,"failed":0} SSSS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:18:24.509: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [BeforeEach] Kubectl logs /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358 STEP: creating an pod Aug 16 20:18:25.228: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run logs-generator --generator=run-pod/v1 --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 --namespace=kubectl-2501 -- logs-generator --log-lines-total 100 --run-duration 20s' Aug 16 20:18:26.563: INFO: stderr: "" Aug 16 20:18:26.563: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Waiting for log generator to start. 
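For reference on the Secrets volume spec that passed just above: the "mappings and Item Mode" it checks are the items/path remapping and the per-item file mode on a secret volume source. A minimal sketch with illustrative names; the suite generates its own secret name, mount path and expected content.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: secret-test-map
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-map
      items:
      - key: data-1
        path: new-path-data-1   # key remapped to a different file name
        mode: 0400              # "Item Mode set": per-item file permission
EOF

# once the pod has completed, its log shows the mapped file, its mode and its content
kubectl logs pod-secrets-demo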
Aug 16 20:18:26.564: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Aug 16 20:18:26.564: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-2501" to be "running and ready, or succeeded" Aug 16 20:18:26.770: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 205.92071ms Aug 16 20:18:28.777: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.212576184s Aug 16 20:18:31.597: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 5.033025424s Aug 16 20:18:34.470: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 7.906114478s Aug 16 20:18:36.668: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 10.10362146s Aug 16 20:18:38.765: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 12.200576292s Aug 16 20:18:38.765: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Aug 16 20:18:38.765: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator] STEP: checking for a matching strings Aug 16 20:18:38.767: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-2501' Aug 16 20:18:40.228: INFO: stderr: "" Aug 16 20:18:40.228: INFO: stdout: "I0816 20:18:35.810162 1 logs_generator.go:76] 0 GET /api/v1/namespaces/ns/pods/cgn 223\nI0816 20:18:36.010325 1 logs_generator.go:76] 1 POST /api/v1/namespaces/default/pods/q2f 501\nI0816 20:18:36.210416 1 logs_generator.go:76] 2 POST /api/v1/namespaces/default/pods/2lmv 518\nI0816 20:18:36.410375 1 logs_generator.go:76] 3 POST /api/v1/namespaces/kube-system/pods/xnb8 457\nI0816 20:18:36.610341 1 logs_generator.go:76] 4 GET /api/v1/namespaces/kube-system/pods/dql 553\nI0816 20:18:36.810417 1 logs_generator.go:76] 5 POST /api/v1/namespaces/default/pods/zvjs 304\nI0816 20:18:37.010368 1 logs_generator.go:76] 6 POST /api/v1/namespaces/default/pods/pfxj 441\nI0816 20:18:37.210327 1 logs_generator.go:76] 7 GET /api/v1/namespaces/default/pods/dzjh 571\nI0816 20:18:37.410303 1 logs_generator.go:76] 8 GET /api/v1/namespaces/kube-system/pods/bhl 468\nI0816 20:18:37.610412 1 logs_generator.go:76] 9 POST /api/v1/namespaces/default/pods/jr7 281\nI0816 20:18:37.810390 1 logs_generator.go:76] 10 GET /api/v1/namespaces/default/pods/snrq 444\nI0816 20:18:38.010337 1 logs_generator.go:76] 11 GET /api/v1/namespaces/default/pods/cs8 523\nI0816 20:18:38.210393 1 logs_generator.go:76] 12 PUT /api/v1/namespaces/default/pods/gwzd 450\nI0816 20:18:38.410417 1 logs_generator.go:76] 13 POST /api/v1/namespaces/default/pods/t6r 380\nI0816 20:18:38.610319 1 logs_generator.go:76] 14 POST /api/v1/namespaces/default/pods/g7b 481\nI0816 20:18:38.810380 1 logs_generator.go:76] 15 POST /api/v1/namespaces/default/pods/vz6 478\nI0816 20:18:39.010312 1 logs_generator.go:76] 16 PUT /api/v1/namespaces/kube-system/pods/lmn4 544\nI0816 20:18:39.210314 1 logs_generator.go:76] 17 PUT /api/v1/namespaces/ns/pods/vw7 517\nI0816 20:18:39.410341 1 logs_generator.go:76] 18 POST /api/v1/namespaces/kube-system/pods/4vx 546\nI0816 20:18:39.611966 1 logs_generator.go:76] 19 GET /api/v1/namespaces/ns/pods/kmm 243\nI0816 20:18:39.810312 1 logs_generator.go:76] 20 POST /api/v1/namespaces/ns/pods/gwjf 494\nI0816 20:18:40.010339 1 logs_generator.go:76] 21 PUT /api/v1/namespaces/default/pods/jzq 549\nI0816 20:18:40.210282 1 
logs_generator.go:76] 22 POST /api/v1/namespaces/ns/pods/lvfh 456\n" STEP: limiting log lines Aug 16 20:18:40.229: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-2501 --tail=1' Aug 16 20:18:42.382: INFO: stderr: "" Aug 16 20:18:42.383: INFO: stdout: "I0816 20:18:42.010299 1 logs_generator.go:76] 31 PUT /api/v1/namespaces/ns/pods/dk2t 461\nI0816 20:18:42.210281 1 logs_generator.go:76] 32 PUT /api/v1/namespaces/default/pods/smn 213\n" Aug 16 20:18:42.383: INFO: got output "I0816 20:18:42.010299 1 logs_generator.go:76] 31 PUT /api/v1/namespaces/ns/pods/dk2t 461\nI0816 20:18:42.210281 1 logs_generator.go:76] 32 PUT /api/v1/namespaces/default/pods/smn 213\n" Aug 16 20:18:42.385: FAIL: Expected : 2 to equal : 1 [AfterEach] Kubectl logs /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364 Aug 16 20:18:42.394: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-2501' Aug 16 20:18:48.764: INFO: stderr: "" Aug 16 20:18:48.764: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 STEP: Collecting events from namespace "kubectl-2501". STEP: Found 5 events. Aug 16 20:18:48.834: INFO: At 2020-08-16 20:18:26 +0000 UTC - event for logs-generator: {default-scheduler } Scheduled: Successfully assigned kubectl-2501/logs-generator to jerma-worker2 Aug 16 20:18:48.835: INFO: At 2020-08-16 20:18:29 +0000 UTC - event for logs-generator: {kubelet jerma-worker2} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine Aug 16 20:18:48.835: INFO: At 2020-08-16 20:18:35 +0000 UTC - event for logs-generator: {kubelet jerma-worker2} Created: Created container logs-generator Aug 16 20:18:48.835: INFO: At 2020-08-16 20:18:36 +0000 UTC - event for logs-generator: {kubelet jerma-worker2} Started: Started container logs-generator Aug 16 20:18:48.835: INFO: At 2020-08-16 20:18:43 +0000 UTC - event for logs-generator: {kubelet jerma-worker2} Killing: Stopping container logs-generator Aug 16 20:18:48.874: INFO: POD NODE PHASE GRACE CONDITIONS Aug 16 20:18:48.875: INFO: Aug 16 20:18:48.882: INFO: Logging node info for node jerma-control-plane Aug 16 20:18:48.886: INFO: Node Info: &Node{ObjectMeta:{jerma-control-plane /api/v1/nodes/jerma-control-plane 9807fae8-7165-458f-b7b4-e66f5ffc1e8c 487385 0 2020-08-15 09:37:06 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:jerma-control-plane kubernetes.io/os:linux node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922108928 0} {} BinarySI},pods: {{110 0} {} 110 
DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922108928 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-08-16 20:13:50 +0000 UTC,LastTransitionTime:2020-08-15 09:37:06 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-08-16 20:13:50 +0000 UTC,LastTransitionTime:2020-08-15 09:37:06 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-08-16 20:13:50 +0000 UTC,LastTransitionTime:2020-08-15 09:37:06 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-08-16 20:13:50 +0000 UTC,LastTransitionTime:2020-08-15 09:37:40 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.10,},NodeAddress{Type:Hostname,Address:jerma-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:e52c45bc589d48d995e8fd79ad5bf250,SystemUUID:b981bdc7-d264-48ef-ab5e-3308e23aaf13,BootID:11738d2d-5baa-4089-8e7f-2fb0329fce58,KernelVersion:4.15.0-109-generic,OSImage:Ubuntu 19.10,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.17.5,KubeProxyVersion:v1.17.5,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.3-0],SizeBytes:289997247,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.17.5],SizeBytes:144466737,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.17.5],SizeBytes:132100222,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.17.5],SizeBytes:131244355,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.17.5],SizeBytes:111947057,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/coredns:1.6.5],SizeBytes:41705951,},ContainerImage{Names:[k8s.gcr.io/pause:3.1],SizeBytes:746479,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Aug 16 20:18:48.896: INFO: Logging kubelet events for node jerma-control-plane Aug 16 20:18:48.900: INFO: Logging pods the kubelet thinks is on node jerma-control-plane Aug 16 20:18:48.932: INFO: kube-controller-manager-jerma-control-plane started at 2020-08-15 09:37:10 +0000 UTC (0+1 container statuses recorded) Aug 16 20:18:48.932: INFO: Container kube-controller-manager ready: true, restart count 0 Aug 16 20:18:48.932: INFO: kube-scheduler-jerma-control-plane started at 2020-08-15 09:37:10 +0000 UTC (0+1 container statuses recorded) Aug 16 20:18:48.932: INFO: Container kube-scheduler ready: true, restart count 0 Aug 16 20:18:48.932: INFO: kube-proxy-hmb6l started at 2020-08-15 09:37:25 +0000 UTC (0+1 container statuses recorded) Aug 16 20:18:48.932: INFO: Container kube-proxy ready: true, restart count 0 Aug 16 20:18:48.932: INFO: kindnet-j88mt started at 2020-08-15 09:37:25 +0000 UTC (0+1 container statuses recorded) Aug 16 
20:18:48.932: INFO: Container kindnet-cni ready: true, restart count 0 Aug 16 20:18:48.932: INFO: local-path-provisioner-58f6947c7-p2cqw started at 2020-08-15 09:37:43 +0000 UTC (0+1 container statuses recorded) Aug 16 20:18:48.932: INFO: Container local-path-provisioner ready: true, restart count 0 Aug 16 20:18:48.932: INFO: coredns-6955765f44-bvrm4 started at 2020-08-15 09:37:43 +0000 UTC (0+1 container statuses recorded) Aug 16 20:18:48.932: INFO: Container coredns ready: true, restart count 0 Aug 16 20:18:48.933: INFO: etcd-jerma-control-plane started at 2020-08-15 09:37:10 +0000 UTC (0+1 container statuses recorded) Aug 16 20:18:48.933: INFO: Container etcd ready: true, restart count 0 Aug 16 20:18:48.933: INFO: kube-apiserver-jerma-control-plane started at 2020-08-15 09:37:10 +0000 UTC (0+1 container statuses recorded) Aug 16 20:18:48.933: INFO: Container kube-apiserver ready: true, restart count 0 Aug 16 20:18:48.933: INFO: coredns-6955765f44-db8rh started at 2020-08-15 09:37:45 +0000 UTC (0+1 container statuses recorded) Aug 16 20:18:48.933: INFO: Container coredns ready: true, restart count 0 W0816 20:18:49.131552 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Aug 16 20:18:49.255: INFO: Latency metrics for node jerma-control-plane Aug 16 20:18:49.255: INFO: Logging node info for node jerma-worker Aug 16 20:18:49.260: INFO: Node Info: &Node{ObjectMeta:{jerma-worker /api/v1/nodes/jerma-worker 90e2faec-9376-474f-8ba7-1ed2afa852de 488539 0 2020-08-15 09:37:46 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:jerma-worker kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922108928 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922108928 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-08-16 20:17:36 +0000 UTC,LastTransitionTime:2020-08-15 09:37:46 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-08-16 20:17:36 +0000 UTC,LastTransitionTime:2020-08-15 09:37:46 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-08-16 20:17:36 +0000 UTC,LastTransitionTime:2020-08-15 09:37:46 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-08-16 20:17:36 +0000 UTC,LastTransitionTime:2020-08-15 09:38:48 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.6,},NodeAddress{Type:Hostname,Address:jerma-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:dab185c7ff69420787de20ae0dd7260c,SystemUUID:3dd3aef9-8386-4ead-ab50-5cb1f1b626a9,BootID:11738d2d-5baa-4089-8e7f-2fb0329fce58,KernelVersion:4.15.0-109-generic,OSImage:Ubuntu 19.10,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.17.5,KubeProxyVersion:v1.17.5,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:232be9c5a4400e4c5e0932fde50af8f379e3e9ddd4d3f28da6ec78c86f6ed9f6 docker.io/ollivier/clearwater-cassandra:latest],SizeBytes:386367560,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:0b4d47a5161ecb6b44f6a479a27522b802096a2deea049cd6f3c01a62b585318 docker.io/ollivier/clearwater-homestead-prov:latest],SizeBytes:360604157,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:28557b896e190c72f02121314ca7c9abaca30f91a733b566b2c44b761e5a252c docker.io/ollivier/clearwater-ellis:latest],SizeBytes:351361235,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:257ef9011d4ff30771c0c48ef7e3b16926dce88c17d4435953f433fa9e0d731a docker.io/ollivier/clearwater-homer:latest],SizeBytes:344184630,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:eb85c150a60609d7b22b70b99d6a1a7a1c035fd64e30cca203a8b8d167bb7938 docker.io/ollivier/clearwater-astaire:latest],SizeBytes:327110542,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:95d9d53fc68c24deb2095b7b91aa7e53090f99e9c1d5c43dcf5d9a6fb8a8cdc2 docker.io/ollivier/clearwater-bono:latest],SizeBytes:303550943,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:861863a8f603b8851858fcb66492d5fa8af26e14ec84a26da5d75fe762b144b2 docker.io/ollivier/clearwater-sprout:latest],SizeBytes:298507433,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:98347f9bf0eaf79649590e3fa0ea8d1938ae50d7703e8f9c171f0d74520ac7fb docker.io/ollivier/clearwater-homestead:latest],SizeBytes:295048084,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.3-0],SizeBytes:289997247,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:adfa3978f2c94734010c014a2be7db9bc328419e0a205904543a86cd0719bd1a docker.io/ollivier/clearwater-ralf:latest],SizeBytes:287324942,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:3e838bae03946022eae06e3d343167d4b28507909e9c17e1bf574a23b423f83d docker.io/ollivier/clearwater-chronos:latest],SizeBytes:285384791,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.17.5],SizeBytes:144466737,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.17.5],SizeBytes:132100222,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.17.5],SizeBytes:131244355,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.17.5],SizeBytes:111947057,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb 
gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[k8s.gcr.io/coredns:1.6.5],SizeBytes:41705951,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:77e928c23a5942aa681646be96dfb5897efe17b1e8676e8e94003ad08891b881 docker.io/ollivier/clearwater-live-test:latest],SizeBytes:39388175,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:17444032,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[quay.io/coreos/etcd:v2.2.5],SizeBytes:7670543,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1],SizeBytes:4331310,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:3054649,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:c8bf5bbec6397465a247c2bb3e589bb77e4f62ff88a027175ecb2d9e4f12c9d7 docker.io/appropriate/curl:latest],SizeBytes:2779755,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:1804628,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0],SizeBytes:1799936,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:1791163,},ContainerImage{Names:[docker.io/library/busybox@sha256:4f47c01fa91355af2865ac10fef5bf6ec9c7f42ad2321377c21e844427972977 docker.io/library/busybox:latest],SizeBytes:767890,},ContainerImage{Names:[k8s.gcr.io/pause:3.1],SizeBytes:746479,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 
docker.io/library/busybox:1.28],SizeBytes:727869,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:599341,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:539309,},ContainerImage{Names:[docker.io/kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 docker.io/kubernetes/pause:latest],SizeBytes:74015,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Aug 16 20:18:49.263: INFO: Logging kubelet events for node jerma-worker Aug 16 20:18:49.266: INFO: Logging pods the kubelet thinks is on node jerma-worker Aug 16 20:18:49.286: INFO: kube-proxy-lgd85 started at 2020-08-15 09:37:48 +0000 UTC (0+1 container statuses recorded) Aug 16 20:18:49.286: INFO: Container kube-proxy ready: true, restart count 0 Aug 16 20:18:49.286: INFO: kindnet-tfrcx started at 2020-08-15 09:37:48 +0000 UTC (0+1 container statuses recorded) Aug 16 20:18:49.286: INFO: Container kindnet-cni ready: true, restart count 0 W0816 20:18:49.471839 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Aug 16 20:18:50.205: INFO: Latency metrics for node jerma-worker Aug 16 20:18:50.205: INFO: Logging node info for node jerma-worker2 Aug 16 20:18:50.316: INFO: Node Info: &Node{ObjectMeta:{jerma-worker2 /api/v1/nodes/jerma-worker2 0aee0803-8d91-4dce-8e41-bab7d365873c 487674 0 2020-08-15 09:37:46 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:jerma-worker2 kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922108928 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922108928 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-08-16 20:14:56 +0000 UTC,LastTransitionTime:2020-08-15 09:37:46 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-08-16 20:14:56 +0000 UTC,LastTransitionTime:2020-08-15 09:37:46 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-08-16 20:14:56 +0000 UTC,LastTransitionTime:2020-08-15 09:37:46 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-08-16 20:14:56 +0000 
UTC,LastTransitionTime:2020-08-15 09:38:17 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.3,},NodeAddress{Type:Hostname,Address:jerma-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:a7187009cafb40a1b2569a5b2f3ef752,SystemUUID:48129e97-8c3c-4952-92e4-16b347778cb9,BootID:11738d2d-5baa-4089-8e7f-2fb0329fce58,KernelVersion:4.15.0-109-generic,OSImage:Ubuntu 19.10,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.17.5,KubeProxyVersion:v1.17.5,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:232be9c5a4400e4c5e0932fde50af8f379e3e9ddd4d3f28da6ec78c86f6ed9f6 docker.io/ollivier/clearwater-cassandra:latest],SizeBytes:386367560,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:0b4d47a5161ecb6b44f6a479a27522b802096a2deea049cd6f3c01a62b585318 docker.io/ollivier/clearwater-homestead-prov:latest],SizeBytes:360604157,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:28557b896e190c72f02121314ca7c9abaca30f91a733b566b2c44b761e5a252c docker.io/ollivier/clearwater-ellis:latest],SizeBytes:351361235,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:257ef9011d4ff30771c0c48ef7e3b16926dce88c17d4435953f433fa9e0d731a docker.io/ollivier/clearwater-homer:latest],SizeBytes:344184630,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:eb85c150a60609d7b22b70b99d6a1a7a1c035fd64e30cca203a8b8d167bb7938 docker.io/ollivier/clearwater-astaire:latest],SizeBytes:327110542,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:95d9d53fc68c24deb2095b7b91aa7e53090f99e9c1d5c43dcf5d9a6fb8a8cdc2 docker.io/ollivier/clearwater-bono:latest],SizeBytes:303550943,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:861863a8f603b8851858fcb66492d5fa8af26e14ec84a26da5d75fe762b144b2 docker.io/ollivier/clearwater-sprout:latest],SizeBytes:298507433,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:98347f9bf0eaf79649590e3fa0ea8d1938ae50d7703e8f9c171f0d74520ac7fb docker.io/ollivier/clearwater-homestead:latest],SizeBytes:295048084,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4afb99b4690b418ffc2ceb67e1a17376457e441c1f09ab55447f0aaf992fa646 k8s.gcr.io/etcd:3.4.3-0 k8s.gcr.io/etcd:3.4.3],SizeBytes:289997247,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:adfa3978f2c94734010c014a2be7db9bc328419e0a205904543a86cd0719bd1a docker.io/ollivier/clearwater-ralf:latest],SizeBytes:287324942,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:3e838bae03946022eae06e3d343167d4b28507909e9c17e1bf574a23b423f83d docker.io/ollivier/clearwater-chronos:latest],SizeBytes:285384791,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.17.5],SizeBytes:144466737,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.17.5],SizeBytes:132100222,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.17.5],SizeBytes:131244355,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 
docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.17.5],SizeBytes:111947057,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[k8s.gcr.io/coredns:1.6.5],SizeBytes:41705951,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:77e928c23a5942aa681646be96dfb5897efe17b1e8676e8e94003ad08891b881 docker.io/ollivier/clearwater-live-test:latest],SizeBytes:39388175,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:17444032,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:1bafcc6fb1aa990b487850adba9cadc020e42d7905aa8a30481182a477ba24b0 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.10],SizeBytes:16222606,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[quay.io/coreos/etcd:v2.2.5],SizeBytes:7670543,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1],SizeBytes:4331310,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:3054649,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:c8bf5bbec6397465a247c2bb3e589bb77e4f62ff88a027175ecb2d9e4f12c9d7 docker.io/appropriate/curl:latest],SizeBytes:2779755,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:1804628,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0],SizeBytes:1799936,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e 
gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:1791163,},ContainerImage{Names:[docker.io/library/busybox@sha256:4f47c01fa91355af2865ac10fef5bf6ec9c7f42ad2321377c21e844427972977 docker.io/library/busybox:latest],SizeBytes:767890,},ContainerImage{Names:[k8s.gcr.io/pause:3.1],SizeBytes:746479,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 docker.io/library/busybox:1.28],SizeBytes:727869,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:599341,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:539309,},ContainerImage{Names:[docker.io/kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 docker.io/kubernetes/pause:latest],SizeBytes:74015,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Aug 16 20:18:50.320: INFO: Logging kubelet events for node jerma-worker2 Aug 16 20:18:50.383: INFO: Logging pods the kubelet thinks is on node jerma-worker2 Aug 16 20:18:50.396: INFO: kube-proxy-ckhpn started at 2020-08-15 09:37:48 +0000 UTC (0+1 container statuses recorded) Aug 16 20:18:50.396: INFO: Container kube-proxy ready: true, restart count 0 Aug 16 20:18:50.396: INFO: kindnet-gxck9 started at 2020-08-15 09:37:48 +0000 UTC (0+1 container statuses recorded) Aug 16 20:18:50.396: INFO: Container kindnet-cni ready: true, restart count 0 W0816 20:18:50.403089 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Aug 16 20:18:50.518: INFO: Latency metrics for node jerma-worker2 Aug 16 20:18:50.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2501" for this suite. 
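The FAIL recorded in this spec is the --tail assertion: `--tail=1` was expected to return exactly one log line and returned two. For reference, kubectl's log-filtering flags of this kind, written against the pod from this run (they only work while the pod still exists; it is deleted in the AfterEach above):

# tail only the most recent line (the check that failed expected one line back)
kubectl logs logs-generator -c logs-generator --namespace=kubectl-2501 --tail=1

# limit by bytes or by age instead of by line count
kubectl logs logs-generator --namespace=kubectl-2501 --limit-bytes=100
kubectl logs logs-generator --namespace=kubectl-2501 --since=10s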
• Failure [26.031 seconds] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1354 should be able to retrieve and filter logs [Conformance] [It] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Aug 16 20:18:42.385: Expected : 2 to equal : 1 /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399 ------------------------------ {"msg":"FAILED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":278,"completed":52,"skipped":732,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:18:50.545: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod busybox-3f5d4a1a-cb49-4cc0-b026-00cbd9c82564 in namespace container-probe-6202 Aug 16 20:19:01.531: INFO: Started pod busybox-3f5d4a1a-cb49-4cc0-b026-00cbd9c82564 in namespace container-probe-6202 STEP: checking the pod's current state and verifying that restartCount is present Aug 16 20:19:02.038: INFO: Initial restart count of pod busybox-3f5d4a1a-cb49-4cc0-b026-00cbd9c82564 is 0 Aug 16 20:19:50.939: INFO: Restart count of pod container-probe-6202/busybox-3f5d4a1a-cb49-4cc0-b026-00cbd9c82564 is now 1 (48.90084613s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:19:51.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6202" for this suite. 
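The restart observed above (restartCount 0 -> 1 after roughly 49 seconds) is driven by an exec probe running `cat /tmp/health` inside the container: once the file disappears, the probe fails and the kubelet restarts the container. A minimal sketch of that pattern; the pod name, image tag and timings are illustrative.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec-demo
spec:
  containers:
  - name: busybox
    image: busybox:1.29
    # create the health file, drop it after 10s, then idle so the probe starts failing
    args: ["/bin/sh", "-c", "touch /tmp/health; sleep 10; rm -f /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5
      failureThreshold: 1
EOF

# restartCount climbs once the probe fails, which is what the spec asserts on
kubectl get pod liveness-exec-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'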
• [SLOW TEST:61.371 seconds] [k8s.io] Probing container /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":53,"skipped":747,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:19:51.918: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Aug 16 20:19:53.061: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5ece96a6-b52c-4cec-8947-10db7f56b7b0" in namespace "projected-5482" to be "success or failure" Aug 16 20:19:53.297: INFO: Pod "downwardapi-volume-5ece96a6-b52c-4cec-8947-10db7f56b7b0": Phase="Pending", Reason="", readiness=false. Elapsed: 236.453279ms Aug 16 20:19:55.304: INFO: Pod "downwardapi-volume-5ece96a6-b52c-4cec-8947-10db7f56b7b0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.243146496s Aug 16 20:19:57.434: INFO: Pod "downwardapi-volume-5ece96a6-b52c-4cec-8947-10db7f56b7b0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.37307958s Aug 16 20:19:59.585: INFO: Pod "downwardapi-volume-5ece96a6-b52c-4cec-8947-10db7f56b7b0": Phase="Running", Reason="", readiness=true. Elapsed: 6.523859603s Aug 16 20:20:01.617: INFO: Pod "downwardapi-volume-5ece96a6-b52c-4cec-8947-10db7f56b7b0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.556372416s STEP: Saw pod success Aug 16 20:20:01.618: INFO: Pod "downwardapi-volume-5ece96a6-b52c-4cec-8947-10db7f56b7b0" satisfied condition "success or failure" Aug 16 20:20:01.673: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-5ece96a6-b52c-4cec-8947-10db7f56b7b0 container client-container: STEP: delete the pod Aug 16 20:20:01.992: INFO: Waiting for pod downwardapi-volume-5ece96a6-b52c-4cec-8947-10db7f56b7b0 to disappear Aug 16 20:20:02.021: INFO: Pod downwardapi-volume-5ece96a6-b52c-4cec-8947-10db7f56b7b0 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:20:02.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5482" for this suite. • [SLOW TEST:10.303 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":54,"skipped":750,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:20:02.223: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should support --unix-socket=/path [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Starting the proxy Aug 16 20:20:04.674: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix529891077/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:20:05.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6726" for this suite. 
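The proxy spec just above starts `kubectl proxy` on a unix socket instead of a TCP port and then reads /api/ through it. The same check by hand, with an illustrative socket path (curl needs --unix-socket support, 7.40+):

# serve the API over a unix socket and query it through that socket
kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock &
PROXY_PID=$!
sleep 1
curl --silent --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/
kill "$PROXY_PID"   # stop the background proxy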
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":278,"completed":55,"skipped":767,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:20:05.777: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name secret-emptykey-test-3fe9a5d0-7548-4165-96dd-b3122321959f [AfterEach] [sig-api-machinery] Secrets /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:20:06.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6354" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":278,"completed":56,"skipped":771,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:20:06.637: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Aug 16 20:20:07.444: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-3906 /api/v1/namespaces/watch-3906/configmaps/e2e-watch-test-resource-version 1a0e46f8-4ff6-4fb5-b8a3-d4a56fb66942 489338 0 2020-08-16 20:20:07 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 
2,},BinaryData:map[string][]byte{},} Aug 16 20:20:07.446: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-3906 /api/v1/namespaces/watch-3906/configmaps/e2e-watch-test-resource-version 1a0e46f8-4ff6-4fb5-b8a3-d4a56fb66942 489339 0 2020-08-16 20:20:07 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:20:07.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-3906" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":278,"completed":57,"skipped":796,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:20:07.759: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 16 20:20:11.477: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 16 20:20:13.767: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733206011, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733206011, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733206011, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733206010, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 16 20:20:15.958: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733206011, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733206011, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733206011, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733206010, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 16 20:20:18.052: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733206011, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733206011, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733206011, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733206010, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 16 20:20:21.021: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:20:22.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-45" for this suite. STEP: Destroying namespace "webhook-45-markers" for this suite. 
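The "Registering the mutating configmap webhook" step above creates a MutatingWebhookConfiguration pointing at the sample-webhook Service deployed earlier in the spec. A sketch of that registration's shape; the webhook name, handler path and scoping are assumptions (the suite generates its own names, serving certs and namespaceSelector), and failurePolicy is set to Ignore here so applying it without a live backend cannot block unrelated ConfigMap writes.

kubectl apply -f - <<'EOF'
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: mutate-configmaps-demo
webhooks:
- name: add-data.configmaps.example.com
  admissionReviewVersions: ["v1", "v1beta1"]
  sideEffects: None
  failurePolicy: Ignore            # demo-safe; the suite scopes its webhook with a namespaceSelector instead
  clientConfig:
    service:
      name: e2e-test-webhook       # the Service created by "Deploying the webhook service"
      namespace: webhook-45        # namespace from this run; a placeholder elsewhere
      path: /mutating-configmaps   # assumed handler path on the sample webhook pod
    # a real registration also carries caBundle for the generated serving cert
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    resources: ["configmaps"]
    operations: ["CREATE"]
EOF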
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:17.225 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":278,"completed":58,"skipped":799,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:20:24.989: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 16 20:20:32.262: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 16 20:20:34.638: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733206032, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733206032, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733206032, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733206032, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 16 20:20:36.668: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733206032, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733206032, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733206032, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733206032, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 16 20:20:39.767: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:20:40.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1365" for this suite. STEP: Destroying namespace "webhook-1365-markers" for this suite. 
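The rules toggle exercised here (drop the create operation, then patch it back in) could be reproduced with kubectl against a configuration shaped like the earlier sketch. This is a hedged illustration only; the suite drives the same change through the AdmissionRegistration API directly, and the configuration name below is a placeholder.

    # Remove CREATE from the first rule: newly created ConfigMaps are no longer mutated
    kubectl patch mutatingwebhookconfiguration e2e-test-mutate-configmap --type=json \
      -p='[{"op":"replace","path":"/webhooks/0/rules/0/operations","value":["UPDATE"]}]'

    # Patch CREATE back in: the next ConfigMap create is mutated again
    kubectl patch mutatingwebhookconfiguration e2e-test-mutate-configmap --type=json \
      -p='[{"op":"replace","path":"/webhooks/0/rules/0/operations","value":["CREATE","UPDATE"]}]'

This mirrors the two assertions in the test: a ConfigMap created while CREATE is absent from the rules comes back unmodified, and one created after the patch comes back mutated.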
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:15.619 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":278,"completed":59,"skipped":859,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:20:40.609: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-projected-lmlg STEP: Creating a pod to test atomic-volume-subpath Aug 16 20:20:41.010: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-lmlg" in namespace "subpath-409" to be "success or failure" Aug 16 20:20:41.068: INFO: Pod "pod-subpath-test-projected-lmlg": Phase="Pending", Reason="", readiness=false. Elapsed: 58.563112ms Aug 16 20:20:43.184: INFO: Pod "pod-subpath-test-projected-lmlg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.174375623s Aug 16 20:20:45.359: INFO: Pod "pod-subpath-test-projected-lmlg": Phase="Running", Reason="", readiness=true. Elapsed: 4.348775952s Aug 16 20:20:47.364: INFO: Pod "pod-subpath-test-projected-lmlg": Phase="Running", Reason="", readiness=true. Elapsed: 6.353723265s Aug 16 20:20:49.370: INFO: Pod "pod-subpath-test-projected-lmlg": Phase="Running", Reason="", readiness=true. Elapsed: 8.359861542s Aug 16 20:20:51.513: INFO: Pod "pod-subpath-test-projected-lmlg": Phase="Running", Reason="", readiness=true. Elapsed: 10.503415526s Aug 16 20:20:53.521: INFO: Pod "pod-subpath-test-projected-lmlg": Phase="Running", Reason="", readiness=true. Elapsed: 12.510816676s Aug 16 20:20:55.626: INFO: Pod "pod-subpath-test-projected-lmlg": Phase="Running", Reason="", readiness=true. 
Elapsed: 14.616289928s Aug 16 20:20:57.633: INFO: Pod "pod-subpath-test-projected-lmlg": Phase="Running", Reason="", readiness=true. Elapsed: 16.622799729s Aug 16 20:20:59.723: INFO: Pod "pod-subpath-test-projected-lmlg": Phase="Running", Reason="", readiness=true. Elapsed: 18.713359949s Aug 16 20:21:01.729: INFO: Pod "pod-subpath-test-projected-lmlg": Phase="Running", Reason="", readiness=true. Elapsed: 20.718844904s Aug 16 20:21:03.883: INFO: Pod "pod-subpath-test-projected-lmlg": Phase="Running", Reason="", readiness=true. Elapsed: 22.87344861s Aug 16 20:21:05.966: INFO: Pod "pod-subpath-test-projected-lmlg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.955751768s STEP: Saw pod success Aug 16 20:21:05.966: INFO: Pod "pod-subpath-test-projected-lmlg" satisfied condition "success or failure" Aug 16 20:21:05.979: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-projected-lmlg container test-container-subpath-projected-lmlg: STEP: delete the pod Aug 16 20:21:06.297: INFO: Waiting for pod pod-subpath-test-projected-lmlg to disappear Aug 16 20:21:06.506: INFO: Pod pod-subpath-test-projected-lmlg no longer exists STEP: Deleting pod pod-subpath-test-projected-lmlg Aug 16 20:21:06.507: INFO: Deleting pod "pod-subpath-test-projected-lmlg" in namespace "subpath-409" [AfterEach] [sig-storage] Subpath /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:21:06.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-409" for this suite. • [SLOW TEST:25.913 seconds] [sig-storage] Subpath /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":278,"completed":60,"skipped":874,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:21:06.524: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] 
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-8429 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-8429 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-8429 Aug 16 20:21:07.695: INFO: Found 0 stateful pods, waiting for 1 Aug 16 20:21:17.706: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Aug 16 20:21:17.713: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8429 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 16 20:21:21.069: INFO: stderr: "I0816 20:21:19.152294 712 log.go:172] (0x4000a9e000) (0x4000a42000) Create stream\nI0816 20:21:19.154998 712 log.go:172] (0x4000a9e000) (0x4000a42000) Stream added, broadcasting: 1\nI0816 20:21:19.166644 712 log.go:172] (0x4000a9e000) Reply frame received for 1\nI0816 20:21:19.167707 712 log.go:172] (0x4000a9e000) (0x4000a420a0) Create stream\nI0816 20:21:19.167800 712 log.go:172] (0x4000a9e000) (0x4000a420a0) Stream added, broadcasting: 3\nI0816 20:21:19.170110 712 log.go:172] (0x4000a9e000) Reply frame received for 3\nI0816 20:21:19.170566 712 log.go:172] (0x4000a9e000) (0x4000a421e0) Create stream\nI0816 20:21:19.170657 712 log.go:172] (0x4000a9e000) (0x4000a421e0) Stream added, broadcasting: 5\nI0816 20:21:19.172028 712 log.go:172] (0x4000a9e000) Reply frame received for 5\nI0816 20:21:19.246641 712 log.go:172] (0x4000a9e000) Data frame received for 5\nI0816 20:21:19.246920 712 log.go:172] (0x4000a421e0) (5) Data frame handling\nI0816 20:21:19.247525 712 log.go:172] (0x4000a421e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0816 20:21:21.046423 712 log.go:172] (0x4000a9e000) Data frame received for 3\nI0816 20:21:21.046905 712 log.go:172] (0x4000a420a0) (3) Data frame handling\nI0816 20:21:21.047440 712 log.go:172] (0x4000a420a0) (3) Data frame sent\nI0816 20:21:21.047598 712 log.go:172] (0x4000a9e000) Data frame received for 3\nI0816 20:21:21.047733 712 log.go:172] (0x4000a420a0) (3) Data frame handling\nI0816 20:21:21.048223 712 log.go:172] (0x4000a9e000) Data frame received for 5\nI0816 20:21:21.048370 712 log.go:172] (0x4000a421e0) (5) Data frame handling\nI0816 20:21:21.048604 712 log.go:172] (0x4000a9e000) Data frame received for 1\nI0816 20:21:21.048860 712 log.go:172] (0x4000a42000) (1) Data frame handling\nI0816 20:21:21.049001 712 log.go:172] (0x4000a42000) (1) Data frame sent\nI0816 20:21:21.050408 712 log.go:172] (0x4000a9e000) (0x4000a42000) Stream removed, broadcasting: 1\nI0816 20:21:21.053047 712 log.go:172] (0x4000a9e000) Go away received\nI0816 20:21:21.056923 712 log.go:172] (0x4000a9e000) (0x4000a42000) Stream removed, broadcasting: 1\nI0816 20:21:21.057194 712 log.go:172] (0x4000a9e000) (0x4000a420a0) Stream removed, broadcasting: 3\nI0816 20:21:21.057365 712 log.go:172] (0x4000a9e000) (0x4000a421e0) Stream removed, 
broadcasting: 5\n" Aug 16 20:21:21.070: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 16 20:21:21.070: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Aug 16 20:21:21.101: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Aug 16 20:21:31.352: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Aug 16 20:21:31.353: INFO: Waiting for statefulset status.replicas updated to 0 Aug 16 20:21:31.398: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999972237s Aug 16 20:21:32.550: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.994163977s Aug 16 20:21:33.753: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.842369182s Aug 16 20:21:35.897: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.639999869s Aug 16 20:21:37.164: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.495996376s Aug 16 20:21:38.171: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.229077068s Aug 16 20:21:39.179: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.221219063s Aug 16 20:21:40.187: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.213669257s Aug 16 20:21:41.194: INFO: Verifying statefulset ss doesn't scale past 1 for another 206.011593ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-8429 Aug 16 20:21:42.442: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8429 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 16 20:21:43.940: INFO: stderr: "I0816 20:21:43.819892 735 log.go:172] (0x4000bacb00) (0x4000b480a0) Create stream\nI0816 20:21:43.825170 735 log.go:172] (0x4000bacb00) (0x4000b480a0) Stream added, broadcasting: 1\nI0816 20:21:43.838944 735 log.go:172] (0x4000bacb00) Reply frame received for 1\nI0816 20:21:43.839854 735 log.go:172] (0x4000bacb00) (0x40008a06e0) Create stream\nI0816 20:21:43.839941 735 log.go:172] (0x4000bacb00) (0x40008a06e0) Stream added, broadcasting: 3\nI0816 20:21:43.841669 735 log.go:172] (0x4000bacb00) Reply frame received for 3\nI0816 20:21:43.842004 735 log.go:172] (0x4000bacb00) (0x4000b48140) Create stream\nI0816 20:21:43.842071 735 log.go:172] (0x4000bacb00) (0x4000b48140) Stream added, broadcasting: 5\nI0816 20:21:43.843522 735 log.go:172] (0x4000bacb00) Reply frame received for 5\nI0816 20:21:43.919631 735 log.go:172] (0x4000bacb00) Data frame received for 3\nI0816 20:21:43.920137 735 log.go:172] (0x4000bacb00) Data frame received for 1\nI0816 20:21:43.920291 735 log.go:172] (0x40008a06e0) (3) Data frame handling\nI0816 20:21:43.920601 735 log.go:172] (0x4000bacb00) Data frame received for 5\nI0816 20:21:43.920837 735 log.go:172] (0x4000b48140) (5) Data frame handling\nI0816 20:21:43.921071 735 log.go:172] (0x4000b480a0) (1) Data frame handling\nI0816 20:21:43.921750 735 log.go:172] (0x40008a06e0) (3) Data frame sent\nI0816 20:21:43.922001 735 log.go:172] (0x4000bacb00) Data frame received for 3\nI0816 20:21:43.922099 735 log.go:172] (0x40008a06e0) (3) Data frame handling\nI0816 20:21:43.922679 735 log.go:172] (0x4000b480a0) (1) Data frame sent\nI0816 20:21:43.922790 735 log.go:172] (0x4000b48140) (5) Data frame sent\nI0816 20:21:43.922909 735 log.go:172] (0x4000bacb00) Data frame received for 
5\nI0816 20:21:43.922981 735 log.go:172] (0x4000b48140) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0816 20:21:43.927047 735 log.go:172] (0x4000bacb00) (0x4000b480a0) Stream removed, broadcasting: 1\nI0816 20:21:43.927686 735 log.go:172] (0x4000bacb00) Go away received\nI0816 20:21:43.930727 735 log.go:172] (0x4000bacb00) (0x4000b480a0) Stream removed, broadcasting: 1\nI0816 20:21:43.931094 735 log.go:172] (0x4000bacb00) (0x40008a06e0) Stream removed, broadcasting: 3\nI0816 20:21:43.931354 735 log.go:172] (0x4000bacb00) (0x4000b48140) Stream removed, broadcasting: 5\n" Aug 16 20:21:43.941: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Aug 16 20:21:43.941: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Aug 16 20:21:43.947: INFO: Found 1 stateful pods, waiting for 3 Aug 16 20:21:54.311: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Aug 16 20:21:54.311: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Aug 16 20:21:54.311: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false Aug 16 20:22:04.084: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Aug 16 20:22:04.084: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Aug 16 20:22:04.084: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Aug 16 20:22:04.168: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8429 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 16 20:22:05.834: INFO: stderr: "I0816 20:22:05.548051 757 log.go:172] (0x400071a000) (0x40007f3d60) Create stream\nI0816 20:22:05.553404 757 log.go:172] (0x400071a000) (0x40007f3d60) Stream added, broadcasting: 1\nI0816 20:22:05.569606 757 log.go:172] (0x400071a000) Reply frame received for 1\nI0816 20:22:05.570661 757 log.go:172] (0x400071a000) (0x40007f3e00) Create stream\nI0816 20:22:05.570771 757 log.go:172] (0x400071a000) (0x40007f3e00) Stream added, broadcasting: 3\nI0816 20:22:05.572501 757 log.go:172] (0x400071a000) Reply frame received for 3\nI0816 20:22:05.572886 757 log.go:172] (0x400071a000) (0x4000712140) Create stream\nI0816 20:22:05.572962 757 log.go:172] (0x400071a000) (0x4000712140) Stream added, broadcasting: 5\nI0816 20:22:05.574120 757 log.go:172] (0x400071a000) Reply frame received for 5\nI0816 20:22:05.622034 757 log.go:172] (0x400071a000) Data frame received for 5\nI0816 20:22:05.622339 757 log.go:172] (0x4000712140) (5) Data frame handling\nI0816 20:22:05.623019 757 log.go:172] (0x4000712140) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0816 20:22:05.814112 757 log.go:172] (0x400071a000) Data frame received for 3\nI0816 20:22:05.814431 757 log.go:172] (0x40007f3e00) (3) Data frame handling\nI0816 20:22:05.814686 757 log.go:172] (0x400071a000) Data frame received for 5\nI0816 20:22:05.814989 757 log.go:172] (0x4000712140) (5) Data frame handling\nI0816 20:22:05.815281 757 log.go:172] (0x40007f3e00) (3) Data frame sent\nI0816 20:22:05.815463 757 log.go:172] (0x400071a000) Data frame received for 3\nI0816 20:22:05.815620 757 log.go:172] (0x40007f3e00) (3) 
Data frame handling\nI0816 20:22:05.816371 757 log.go:172] (0x400071a000) Data frame received for 1\nI0816 20:22:05.816571 757 log.go:172] (0x40007f3d60) (1) Data frame handling\nI0816 20:22:05.816856 757 log.go:172] (0x40007f3d60) (1) Data frame sent\nI0816 20:22:05.819142 757 log.go:172] (0x400071a000) (0x40007f3d60) Stream removed, broadcasting: 1\nI0816 20:22:05.821499 757 log.go:172] (0x400071a000) Go away received\nI0816 20:22:05.824595 757 log.go:172] (0x400071a000) (0x40007f3d60) Stream removed, broadcasting: 1\nI0816 20:22:05.825194 757 log.go:172] (0x400071a000) (0x40007f3e00) Stream removed, broadcasting: 3\nI0816 20:22:05.825483 757 log.go:172] (0x400071a000) (0x4000712140) Stream removed, broadcasting: 5\n" Aug 16 20:22:05.835: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 16 20:22:05.835: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Aug 16 20:22:05.836: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8429 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 16 20:22:07.653: INFO: stderr: "I0816 20:22:07.456619 779 log.go:172] (0x4000a52160) (0x400073c000) Create stream\nI0816 20:22:07.459216 779 log.go:172] (0x4000a52160) (0x400073c000) Stream added, broadcasting: 1\nI0816 20:22:07.468130 779 log.go:172] (0x4000a52160) Reply frame received for 1\nI0816 20:22:07.468657 779 log.go:172] (0x4000a52160) (0x40006a20a0) Create stream\nI0816 20:22:07.468786 779 log.go:172] (0x4000a52160) (0x40006a20a0) Stream added, broadcasting: 3\nI0816 20:22:07.470209 779 log.go:172] (0x4000a52160) Reply frame received for 3\nI0816 20:22:07.470512 779 log.go:172] (0x4000a52160) (0x4000780000) Create stream\nI0816 20:22:07.470577 779 log.go:172] (0x4000a52160) (0x4000780000) Stream added, broadcasting: 5\nI0816 20:22:07.472118 779 log.go:172] (0x4000a52160) Reply frame received for 5\nI0816 20:22:07.536716 779 log.go:172] (0x4000a52160) Data frame received for 5\nI0816 20:22:07.537018 779 log.go:172] (0x4000780000) (5) Data frame handling\nI0816 20:22:07.537497 779 log.go:172] (0x4000780000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0816 20:22:07.635833 779 log.go:172] (0x4000a52160) Data frame received for 3\nI0816 20:22:07.636031 779 log.go:172] (0x40006a20a0) (3) Data frame handling\nI0816 20:22:07.636149 779 log.go:172] (0x4000a52160) Data frame received for 5\nI0816 20:22:07.636321 779 log.go:172] (0x4000780000) (5) Data frame handling\nI0816 20:22:07.636395 779 log.go:172] (0x40006a20a0) (3) Data frame sent\nI0816 20:22:07.636494 779 log.go:172] (0x4000a52160) Data frame received for 3\nI0816 20:22:07.636580 779 log.go:172] (0x40006a20a0) (3) Data frame handling\nI0816 20:22:07.637749 779 log.go:172] (0x4000a52160) Data frame received for 1\nI0816 20:22:07.637827 779 log.go:172] (0x400073c000) (1) Data frame handling\nI0816 20:22:07.637910 779 log.go:172] (0x400073c000) (1) Data frame sent\nI0816 20:22:07.639015 779 log.go:172] (0x4000a52160) (0x400073c000) Stream removed, broadcasting: 1\nI0816 20:22:07.642252 779 log.go:172] (0x4000a52160) Go away received\nI0816 20:22:07.645412 779 log.go:172] (0x4000a52160) (0x400073c000) Stream removed, broadcasting: 1\nI0816 20:22:07.645929 779 log.go:172] (0x4000a52160) (0x40006a20a0) Stream removed, broadcasting: 3\nI0816 20:22:07.646087 779 log.go:172] (0x4000a52160) (0x4000780000) Stream removed, 
broadcasting: 5\n" Aug 16 20:22:07.654: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 16 20:22:07.654: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Aug 16 20:22:07.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8429 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 16 20:22:09.397: INFO: stderr: "I0816 20:22:09.054048 801 log.go:172] (0x40009ee000) (0x40007e99a0) Create stream\nI0816 20:22:09.056444 801 log.go:172] (0x40009ee000) (0x40007e99a0) Stream added, broadcasting: 1\nI0816 20:22:09.066494 801 log.go:172] (0x40009ee000) Reply frame received for 1\nI0816 20:22:09.067226 801 log.go:172] (0x40009ee000) (0x4000964000) Create stream\nI0816 20:22:09.067311 801 log.go:172] (0x40009ee000) (0x4000964000) Stream added, broadcasting: 3\nI0816 20:22:09.069053 801 log.go:172] (0x40009ee000) Reply frame received for 3\nI0816 20:22:09.069457 801 log.go:172] (0x40009ee000) (0x40005e0000) Create stream\nI0816 20:22:09.069539 801 log.go:172] (0x40009ee000) (0x40005e0000) Stream added, broadcasting: 5\nI0816 20:22:09.070714 801 log.go:172] (0x40009ee000) Reply frame received for 5\nI0816 20:22:09.135053 801 log.go:172] (0x40009ee000) Data frame received for 5\nI0816 20:22:09.135311 801 log.go:172] (0x40005e0000) (5) Data frame handling\nI0816 20:22:09.135691 801 log.go:172] (0x40005e0000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0816 20:22:09.378559 801 log.go:172] (0x40009ee000) Data frame received for 3\nI0816 20:22:09.378757 801 log.go:172] (0x4000964000) (3) Data frame handling\nI0816 20:22:09.378874 801 log.go:172] (0x40009ee000) Data frame received for 5\nI0816 20:22:09.378997 801 log.go:172] (0x40005e0000) (5) Data frame handling\nI0816 20:22:09.379172 801 log.go:172] (0x4000964000) (3) Data frame sent\nI0816 20:22:09.379341 801 log.go:172] (0x40009ee000) Data frame received for 3\nI0816 20:22:09.379487 801 log.go:172] (0x4000964000) (3) Data frame handling\nI0816 20:22:09.380055 801 log.go:172] (0x40009ee000) Data frame received for 1\nI0816 20:22:09.380164 801 log.go:172] (0x40007e99a0) (1) Data frame handling\nI0816 20:22:09.380275 801 log.go:172] (0x40007e99a0) (1) Data frame sent\nI0816 20:22:09.381058 801 log.go:172] (0x40009ee000) (0x40007e99a0) Stream removed, broadcasting: 1\nI0816 20:22:09.383814 801 log.go:172] (0x40009ee000) Go away received\nI0816 20:22:09.386171 801 log.go:172] (0x40009ee000) (0x40007e99a0) Stream removed, broadcasting: 1\nI0816 20:22:09.386746 801 log.go:172] (0x40009ee000) (0x4000964000) Stream removed, broadcasting: 3\nI0816 20:22:09.387133 801 log.go:172] (0x40009ee000) (0x40005e0000) Stream removed, broadcasting: 5\n" Aug 16 20:22:09.398: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 16 20:22:09.398: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Aug 16 20:22:09.398: INFO: Waiting for statefulset status.replicas updated to 0 Aug 16 20:22:09.675: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Aug 16 20:22:19.689: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Aug 16 20:22:19.689: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Aug 16 20:22:19.689: INFO: Waiting for 
pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Aug 16 20:22:19.714: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999997435s Aug 16 20:22:20.723: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.98367128s Aug 16 20:22:21.762: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.97502131s Aug 16 20:22:23.066: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.936079836s Aug 16 20:22:24.606: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.631528813s Aug 16 20:22:25.826: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.091494389s Aug 16 20:22:26.875: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.872011915s Aug 16 20:22:27.988: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.823297239s Aug 16 20:22:29.120: INFO: Verifying statefulset ss doesn't scale past 3 for another 709.451914ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-8429 Aug 16 20:22:30.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8429 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 16 20:22:32.353: INFO: stderr: "I0816 20:22:32.255955 823 log.go:172] (0x4000b44e70) (0x40009983c0) Create stream\nI0816 20:22:32.260109 823 log.go:172] (0x4000b44e70) (0x40009983c0) Stream added, broadcasting: 1\nI0816 20:22:32.277896 823 log.go:172] (0x4000b44e70) Reply frame received for 1\nI0816 20:22:32.278563 823 log.go:172] (0x4000b44e70) (0x4000652820) Create stream\nI0816 20:22:32.278635 823 log.go:172] (0x4000b44e70) (0x4000652820) Stream added, broadcasting: 3\nI0816 20:22:32.280282 823 log.go:172] (0x4000b44e70) Reply frame received for 3\nI0816 20:22:32.280670 823 log.go:172] (0x4000b44e70) (0x4000998000) Create stream\nI0816 20:22:32.280840 823 log.go:172] (0x4000b44e70) (0x4000998000) Stream added, broadcasting: 5\nI0816 20:22:32.282180 823 log.go:172] (0x4000b44e70) Reply frame received for 5\nI0816 20:22:32.334190 823 log.go:172] (0x4000b44e70) Data frame received for 5\nI0816 20:22:32.334583 823 log.go:172] (0x4000b44e70) Data frame received for 1\nI0816 20:22:32.334873 823 log.go:172] (0x40009983c0) (1) Data frame handling\nI0816 20:22:32.335049 823 log.go:172] (0x4000998000) (5) Data frame handling\nI0816 20:22:32.335179 823 log.go:172] (0x4000b44e70) Data frame received for 3\nI0816 20:22:32.335276 823 log.go:172] (0x4000652820) (3) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0816 20:22:32.336576 823 log.go:172] (0x4000998000) (5) Data frame sent\nI0816 20:22:32.336686 823 log.go:172] (0x40009983c0) (1) Data frame sent\nI0816 20:22:32.336956 823 log.go:172] (0x4000652820) (3) Data frame sent\nI0816 20:22:32.337098 823 log.go:172] (0x4000b44e70) Data frame received for 3\nI0816 20:22:32.337183 823 log.go:172] (0x4000b44e70) Data frame received for 5\nI0816 20:22:32.337297 823 log.go:172] (0x4000998000) (5) Data frame handling\nI0816 20:22:32.337384 823 log.go:172] (0x4000652820) (3) Data frame handling\nI0816 20:22:32.338180 823 log.go:172] (0x4000b44e70) (0x40009983c0) Stream removed, broadcasting: 1\nI0816 20:22:32.340856 823 log.go:172] (0x4000b44e70) Go away received\nI0816 20:22:32.344041 823 log.go:172] (0x4000b44e70) (0x40009983c0) Stream removed, broadcasting: 1\nI0816 20:22:32.344397 823 log.go:172] (0x4000b44e70) (0x4000652820) Stream removed, broadcasting: 3\nI0816 
20:22:32.344643 823 log.go:172] (0x4000b44e70) (0x4000998000) Stream removed, broadcasting: 5\n" Aug 16 20:22:32.354: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Aug 16 20:22:32.354: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Aug 16 20:22:32.354: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8429 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 16 20:22:33.819: INFO: stderr: "I0816 20:22:33.712603 847 log.go:172] (0x400011c2c0) (0x40007e19a0) Create stream\nI0816 20:22:33.716533 847 log.go:172] (0x400011c2c0) (0x40007e19a0) Stream added, broadcasting: 1\nI0816 20:22:33.732358 847 log.go:172] (0x400011c2c0) Reply frame received for 1\nI0816 20:22:33.733070 847 log.go:172] (0x400011c2c0) (0x4000782000) Create stream\nI0816 20:22:33.733138 847 log.go:172] (0x400011c2c0) (0x4000782000) Stream added, broadcasting: 3\nI0816 20:22:33.734848 847 log.go:172] (0x400011c2c0) Reply frame received for 3\nI0816 20:22:33.735155 847 log.go:172] (0x400011c2c0) (0x40007820a0) Create stream\nI0816 20:22:33.735225 847 log.go:172] (0x400011c2c0) (0x40007820a0) Stream added, broadcasting: 5\nI0816 20:22:33.736423 847 log.go:172] (0x400011c2c0) Reply frame received for 5\nI0816 20:22:33.797135 847 log.go:172] (0x400011c2c0) Data frame received for 5\nI0816 20:22:33.797632 847 log.go:172] (0x400011c2c0) Data frame received for 3\nI0816 20:22:33.798027 847 log.go:172] (0x400011c2c0) Data frame received for 1\nI0816 20:22:33.798218 847 log.go:172] (0x40007e19a0) (1) Data frame handling\nI0816 20:22:33.798445 847 log.go:172] (0x4000782000) (3) Data frame handling\nI0816 20:22:33.798648 847 log.go:172] (0x40007820a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0816 20:22:33.800573 847 log.go:172] (0x40007820a0) (5) Data frame sent\nI0816 20:22:33.800806 847 log.go:172] (0x4000782000) (3) Data frame sent\nI0816 20:22:33.801349 847 log.go:172] (0x400011c2c0) Data frame received for 3\nI0816 20:22:33.801465 847 log.go:172] (0x4000782000) (3) Data frame handling\nI0816 20:22:33.801626 847 log.go:172] (0x40007e19a0) (1) Data frame sent\nI0816 20:22:33.801884 847 log.go:172] (0x400011c2c0) Data frame received for 5\nI0816 20:22:33.802480 847 log.go:172] (0x400011c2c0) (0x40007e19a0) Stream removed, broadcasting: 1\nI0816 20:22:33.804015 847 log.go:172] (0x40007820a0) (5) Data frame handling\nI0816 20:22:33.804371 847 log.go:172] (0x400011c2c0) Go away received\nI0816 20:22:33.807077 847 log.go:172] (0x400011c2c0) (0x40007e19a0) Stream removed, broadcasting: 1\nI0816 20:22:33.807357 847 log.go:172] (0x400011c2c0) (0x4000782000) Stream removed, broadcasting: 3\nI0816 20:22:33.807537 847 log.go:172] (0x400011c2c0) (0x40007820a0) Stream removed, broadcasting: 5\n" Aug 16 20:22:33.820: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Aug 16 20:22:33.820: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Aug 16 20:22:33.820: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8429 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 16 20:22:35.588: INFO: rc: 1 Aug 16 20:22:35.589: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config exec --namespace=statefulset-8429 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 Aug 16 20:22:45.590: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8429 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 16 20:22:46.806: INFO: rc: 1 Aug 16 20:22:46.806: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8429 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 16 20:22:56.807: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8429 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 16 20:22:58.054: INFO: rc: 1 Aug 16 20:22:58.054: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8429 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 16 20:23:08.055: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8429 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 16 20:23:09.320: INFO: rc: 1 Aug 16 20:23:09.320: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8429 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 16 20:23:19.321: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8429 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 16 20:23:21.101: INFO: rc: 1 Aug 16 20:23:21.102: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8429 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 16 20:23:31.102: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8429 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 16 20:23:32.345: INFO: rc: 1 Aug 16 20:23:32.345: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8429 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 16 20:23:42.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8429 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 16 20:23:52.288: INFO: rc: 1 Aug 16 20:23:52.288: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config exec --namespace=statefulset-8429 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 16 20:24:02.289: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8429 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 16 20:24:03.655: INFO: rc: 1 Aug 16 20:24:03.656: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8429 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 16 20:24:13.656: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8429 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 16 20:24:14.920: INFO: rc: 1 Aug 16 20:24:14.921: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8429 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 16 20:24:24.922: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8429 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 16 20:24:26.167: INFO: rc: 1 Aug 16 20:24:26.167: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8429 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 16 20:24:36.168: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8429 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 16 20:24:37.542: INFO: rc: 1 Aug 16 20:24:37.542: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8429 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 16 20:24:47.543: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8429 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 16 20:24:48.745: INFO: rc: 1 Aug 16 20:24:48.745: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8429 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 16 20:24:58.746: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8429 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 16 20:25:00.058: INFO: rc: 1 Aug 16 20:25:00.059: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec 
--namespace=statefulset-8429 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 16 20:25:10.059: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8429 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 16 20:25:12.173: INFO: rc: 1 Aug 16 20:25:12.173: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8429 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 16 20:25:22.174: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8429 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 16 20:25:23.426: INFO: rc: 1 Aug 16 20:25:23.426: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8429 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 16 20:25:33.426: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8429 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 16 20:25:34.709: INFO: rc: 1 Aug 16 20:25:34.709: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8429 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 16 20:25:44.709: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8429 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 16 20:25:45.956: INFO: rc: 1 Aug 16 20:25:45.956: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8429 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 16 20:25:55.957: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8429 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 16 20:25:57.389: INFO: rc: 1 Aug 16 20:25:57.389: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8429 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 16 20:26:07.390: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8429 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 16 20:26:08.671: INFO: rc: 1 Aug 16 20:26:08.671: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8429 ss-2 -- /bin/sh 
-x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 16 20:26:18.672: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8429 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 16 20:26:19.854: INFO: rc: 1 Aug 16 20:26:19.855: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8429 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 16 20:26:29.855: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8429 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 16 20:26:31.063: INFO: rc: 1 Aug 16 20:26:31.063: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8429 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 16 20:26:41.064: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8429 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 16 20:26:42.294: INFO: rc: 1 Aug 16 20:26:42.294: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8429 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 16 20:26:52.295: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8429 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 16 20:26:53.609: INFO: rc: 1 Aug 16 20:26:53.610: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8429 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 16 20:27:03.611: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8429 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 16 20:27:04.834: INFO: rc: 1 Aug 16 20:27:04.835: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8429 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 16 20:27:14.835: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8429 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 16 20:27:16.037: INFO: rc: 1 Aug 16 20:27:16.037: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8429 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html 
/usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 16 20:27:26.038: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8429 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 16 20:27:27.240: INFO: rc: 1 Aug 16 20:27:27.240: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8429 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 16 20:27:37.241: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8429 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 16 20:27:38.451: INFO: rc: 1 Aug 16 20:27:38.452: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: Aug 16 20:27:38.452: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Aug 16 20:27:38.471: INFO: Deleting all statefulset in ns statefulset-8429 Aug 16 20:27:38.475: INFO: Scaling statefulset ss to 0 Aug 16 20:27:38.482: INFO: Waiting for statefulset status.replicas updated to 0 Aug 16 20:27:38.485: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:27:38.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8429" for this suite. 
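What this test relies on is that the pods' readiness check serves /usr/local/apache2/htdocs/index.html, so moving that file aside flips a pod to NotReady (visible above as Ready=true becoming Ready=false), and with ordered pod management the StatefulSet controller will not continue scaling, up or down, past a pod that is not Running and Ready. The commands below restate the mechanism using the same exec calls the test runs, together with the scale step; namespace and pod names are taken from this run.

    # Break ss-0's readiness: move the file its readiness check serves out of the web root
    kubectl -n statefulset-8429 exec ss-0 -- /bin/sh -c 'mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'

    # With ss-0 NotReady, scale-up stalls: ss-1 is not created until ss-0 is Ready again
    kubectl -n statefulset-8429 scale statefulset ss --replicas=3

    # Restore readiness: the remaining replicas are then created strictly in order (ss-1, then ss-2),
    # and a later scale-down removes them in reverse order (ss-2 first)
    kubectl -n statefulset-8429 exec ss-0 -- /bin/sh -c 'mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'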
• [SLOW TEST:392.002 seconds] [sig-apps] StatefulSet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":278,"completed":61,"skipped":894,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:27:38.527: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Aug 16 20:27:38.590: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Aug 16 20:27:38.646: INFO: Waiting for terminating namespaces to be deleted... 
Aug 16 20:27:38.651: INFO: Logging pods the kubelet thinks is on node jerma-worker before test Aug 16 20:27:38.676: INFO: kube-proxy-lgd85 from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded) Aug 16 20:27:38.677: INFO: Container kube-proxy ready: true, restart count 0 Aug 16 20:27:38.677: INFO: kindnet-tfrcx from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded) Aug 16 20:27:38.677: INFO: Container kindnet-cni ready: true, restart count 0 Aug 16 20:27:38.677: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test Aug 16 20:27:38.707: INFO: kube-proxy-ckhpn from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded) Aug 16 20:27:38.707: INFO: Container kube-proxy ready: true, restart count 0 Aug 16 20:27:38.707: INFO: kindnet-gxck9 from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded) Aug 16 20:27:38.707: INFO: Container kindnet-cni ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-01ae73e1-7562-478e-b5af-d07829d80c70 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-01ae73e1-7562-478e-b5af-d07829d80c70 off the node jerma-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-01ae73e1-7562-478e-b5af-d07829d80c70 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:32:49.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-83" for this suite. 
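The conflict being validated is a node-level one: a hostPort bound with hostIP 0.0.0.0 claims that port/protocol on every node address, so a second pod asking for the same port and protocol on 127.0.0.1 cannot fit on the same node and stays Pending. A minimal reproduction of the two pods might look like the sketch below; the container image and the node-pinning label are placeholders (the suite pins both pods to one node with the random kubernetes.io/e2e-* label logged above).

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod4
    spec:
      nodeSelector:
        example.io/pinned: "true"          # placeholder for the test's random node label
      containers:
      - name: web
        image: k8s.gcr.io/pause:3.2        # placeholder image; the port claim alone causes the conflict
        ports:
        - containerPort: 8080
          hostPort: 54322
          hostIP: "0.0.0.0"                # claims 54322/TCP on all node addresses
          protocol: TCP
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod5
    spec:
      nodeSelector:
        example.io/pinned: "true"
      containers:
      - name: web
        image: k8s.gcr.io/pause:3.2
        ports:
        - containerPort: 8080
          hostPort: 54322                  # same port and protocol as pod4...
          hostIP: "127.0.0.1"              # ...so it conflicts with the 0.0.0.0 binding
          protocol: TCP                    # and pod5 remains unschedulable on that node
    EOF

As in the test, pod4 is expected to schedule and pod5 is expected to stay Pending for as long as pod4 holds the port on that node.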
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:310.830 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":278,"completed":62,"skipped":906,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:32:49.358: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-4911 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a new StatefulSet Aug 16 20:32:49.434: INFO: Found 0 stateful pods, waiting for 3 Aug 16 20:32:59.442: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Aug 16 20:32:59.442: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Aug 16 20:32:59.442: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Aug 16 20:33:09.441: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Aug 16 20:33:09.441: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Aug 16 20:33:09.441: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Aug 16 20:33:09.472: INFO: Updating stateful set ss2 STEP: Creating a 
new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Aug 16 20:33:19.933: INFO: Updating stateful set ss2 Aug 16 20:33:19.969: INFO: Waiting for Pod statefulset-4911/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Aug 16 20:33:29.982: INFO: Waiting for Pod statefulset-4911/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted Aug 16 20:33:40.779: INFO: Found 2 stateful pods, waiting for 3 Aug 16 20:33:50.787: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Aug 16 20:33:50.787: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Aug 16 20:33:50.787: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Aug 16 20:33:50.840: INFO: Updating stateful set ss2 Aug 16 20:33:50.855: INFO: Waiting for Pod statefulset-4911/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Aug 16 20:34:00.865: INFO: Waiting for Pod statefulset-4911/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Aug 16 20:34:10.895: INFO: Updating stateful set ss2 Aug 16 20:34:11.496: INFO: Waiting for StatefulSet statefulset-4911/ss2 to complete update Aug 16 20:34:11.497: INFO: Waiting for Pod statefulset-4911/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Aug 16 20:34:21.509: INFO: Waiting for StatefulSet statefulset-4911/ss2 to complete update Aug 16 20:34:21.510: INFO: Waiting for Pod statefulset-4911/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Aug 16 20:34:31.512: INFO: Deleting all statefulset in ns statefulset-4911 Aug 16 20:34:31.517: INFO: Scaling statefulset ss2 to 0 Aug 16 20:35:01.563: INFO: Waiting for statefulset status.replicas updated to 0 Aug 16 20:35:01.567: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:35:01.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4911" for this suite. 
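The canary and phased roll-out above are both driven by spec.updateStrategy.rollingUpdate.partition: a partition greater than the replica count holds every pod at the old revision after the template change, lowering it to 2 updates only ss2-2 (the canary), and walking it down to 0 rolls the remaining pods in descending ordinal order. A sketch of the relevant spec in Go, assuming the k8s.io/api/apps/v1 types; the ss2 name and httpd image match the log, the labels and service wiring are illustrative.

```go
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	labels := map[string]string{"app": "ss2"}
	ss := &appsv1.StatefulSet{
		ObjectMeta: metav1.ObjectMeta{Name: "ss2"},
		Spec: appsv1.StatefulSetSpec{
			Replicas:    int32Ptr(3),
			ServiceName: "test", // the headless service created in the BeforeEach above
			Selector:    &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{Containers: []corev1.Container{{
					Name:  "httpd",
					Image: "docker.io/library/httpd:2.4.38-alpine",
				}}},
			},
			UpdateStrategy: appsv1.StatefulSetUpdateStrategy{
				Type: appsv1.RollingUpdateStatefulSetStrategyType,
				// partition > replicas: a template change creates a new revision
				// but no pod is updated ("Not applying an update when the partition
				// is greater than the number of replicas").
				RollingUpdate: &appsv1.RollingUpdateStatefulSetStrategy{Partition: int32Ptr(4)},
			},
		},
	}

	// Canary: only ordinals >= 2 (i.e. ss2-2) move to the new revision.
	ss.Spec.UpdateStrategy.RollingUpdate.Partition = int32Ptr(2)
	// Phased roll-out: keep lowering the partition until it reaches 0.
	ss.Spec.UpdateStrategy.RollingUpdate.Partition = int32Ptr(0)
	fmt.Println(ss.Name)
}
```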
• [SLOW TEST:132.255 seconds] [sig-apps] StatefulSet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":278,"completed":63,"skipped":910,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:35:01.614: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-6f950ede-e13f-4feb-9cd9-360dddc4f3a6 STEP: Creating a pod to test consume configMaps Aug 16 20:35:01.867: INFO: Waiting up to 5m0s for pod "pod-configmaps-4ce5d409-12ee-432c-bf11-772f171534d4" in namespace "configmap-5531" to be "success or failure" Aug 16 20:35:01.892: INFO: Pod "pod-configmaps-4ce5d409-12ee-432c-bf11-772f171534d4": Phase="Pending", Reason="", readiness=false. Elapsed: 24.42441ms Aug 16 20:35:04.107: INFO: Pod "pod-configmaps-4ce5d409-12ee-432c-bf11-772f171534d4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.239266003s Aug 16 20:35:06.113: INFO: Pod "pod-configmaps-4ce5d409-12ee-432c-bf11-772f171534d4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.245914767s Aug 16 20:35:08.162: INFO: Pod "pod-configmaps-4ce5d409-12ee-432c-bf11-772f171534d4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.294888318s STEP: Saw pod success Aug 16 20:35:08.163: INFO: Pod "pod-configmaps-4ce5d409-12ee-432c-bf11-772f171534d4" satisfied condition "success or failure" Aug 16 20:35:08.167: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-4ce5d409-12ee-432c-bf11-772f171534d4 container configmap-volume-test: STEP: delete the pod Aug 16 20:35:08.574: INFO: Waiting for pod pod-configmaps-4ce5d409-12ee-432c-bf11-772f171534d4 to disappear Aug 16 20:35:08.651: INFO: Pod pod-configmaps-4ce5d409-12ee-432c-bf11-772f171534d4 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:35:08.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5531" for this suite. • [SLOW TEST:7.073 seconds] [sig-storage] ConfigMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":64,"skipped":913,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:35:08.688: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-downwardapi-6gfd STEP: Creating a pod to test atomic-volume-subpath Aug 16 20:35:08.850: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-6gfd" in namespace "subpath-113" to be "success or failure" Aug 16 20:35:08.941: INFO: Pod "pod-subpath-test-downwardapi-6gfd": Phase="Pending", Reason="", readiness=false. Elapsed: 90.214748ms Aug 16 20:35:10.953: INFO: Pod "pod-subpath-test-downwardapi-6gfd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.102271868s Aug 16 20:35:13.430: INFO: Pod "pod-subpath-test-downwardapi-6gfd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.579012175s Aug 16 20:35:15.436: INFO: Pod "pod-subpath-test-downwardapi-6gfd": Phase="Running", Reason="", readiness=true. 
Elapsed: 6.585130965s Aug 16 20:35:17.443: INFO: Pod "pod-subpath-test-downwardapi-6gfd": Phase="Running", Reason="", readiness=true. Elapsed: 8.592300973s Aug 16 20:35:19.450: INFO: Pod "pod-subpath-test-downwardapi-6gfd": Phase="Running", Reason="", readiness=true. Elapsed: 10.598944162s Aug 16 20:35:21.455: INFO: Pod "pod-subpath-test-downwardapi-6gfd": Phase="Running", Reason="", readiness=true. Elapsed: 12.604768265s Aug 16 20:35:23.461: INFO: Pod "pod-subpath-test-downwardapi-6gfd": Phase="Running", Reason="", readiness=true. Elapsed: 14.610615047s Aug 16 20:35:25.468: INFO: Pod "pod-subpath-test-downwardapi-6gfd": Phase="Running", Reason="", readiness=true. Elapsed: 16.617407202s Aug 16 20:35:27.475: INFO: Pod "pod-subpath-test-downwardapi-6gfd": Phase="Running", Reason="", readiness=true. Elapsed: 18.624123019s Aug 16 20:35:29.481: INFO: Pod "pod-subpath-test-downwardapi-6gfd": Phase="Running", Reason="", readiness=true. Elapsed: 20.630723044s Aug 16 20:35:31.490: INFO: Pod "pod-subpath-test-downwardapi-6gfd": Phase="Running", Reason="", readiness=true. Elapsed: 22.638925281s Aug 16 20:35:33.515: INFO: Pod "pod-subpath-test-downwardapi-6gfd": Phase="Running", Reason="", readiness=true. Elapsed: 24.664363249s Aug 16 20:35:35.520: INFO: Pod "pod-subpath-test-downwardapi-6gfd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.669885895s STEP: Saw pod success Aug 16 20:35:35.521: INFO: Pod "pod-subpath-test-downwardapi-6gfd" satisfied condition "success or failure" Aug 16 20:35:35.526: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-downwardapi-6gfd container test-container-subpath-downwardapi-6gfd: STEP: delete the pod Aug 16 20:35:35.725: INFO: Waiting for pod pod-subpath-test-downwardapi-6gfd to disappear Aug 16 20:35:35.742: INFO: Pod pod-subpath-test-downwardapi-6gfd no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-6gfd Aug 16 20:35:35.743: INFO: Deleting pod "pod-subpath-test-downwardapi-6gfd" in namespace "subpath-113" [AfterEach] [sig-storage] Subpath /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:35:35.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-113" for this suite. 
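The subpath test above mounts a single file out of a downward API volume via volumeMounts[].subPath and keeps reading it while the kubelet's atomic writer rewrites the volume underneath. A sketch of that pod wiring, assuming the k8s.io/api types; the pod name, mount path, and shell command are illustrative stand-ins for the generated pod-subpath-test-downwardapi-6gfd and its test image.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "subpath-downwardapi-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "podname",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "reader",
				Image:   "busybox", // illustrative; the e2e test uses its own mounttest image
				Command: []string{"sh", "-c", "cat /mnt/podname && sleep 30"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "podinfo",
					MountPath: "/mnt/podname",
					// subPath mounts just this one file out of the volume, which is
					// exactly what the "Atomic writer volumes" group exercises.
					SubPath: "podname",
				}},
			}},
		},
	}
	fmt.Println(pod.Name)
}
```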
• [SLOW TEST:27.191 seconds] [sig-storage] Subpath /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":278,"completed":65,"skipped":921,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] HostPath /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:35:35.882: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test hostPath mode Aug 16 20:35:36.241: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-3752" to be "success or failure" Aug 16 20:35:36.288: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 47.339541ms Aug 16 20:35:38.761: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.519988977s Aug 16 20:35:40.766: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.524473937s Aug 16 20:35:44.352: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.111252308s Aug 16 20:35:46.357: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.115881582s Aug 16 20:35:48.362: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 12.121369449s Aug 16 20:35:50.368: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 14.127390844s STEP: Saw pod success Aug 16 20:35:50.369: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" Aug 16 20:35:50.509: INFO: Trying to get logs from node jerma-worker pod pod-host-path-test container test-container-1: STEP: delete the pod Aug 16 20:35:50.546: INFO: Waiting for pod pod-host-path-test to disappear Aug 16 20:35:50.605: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:35:50.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-3752" for this suite. • [SLOW TEST:14.837 seconds] [sig-storage] HostPath /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":66,"skipped":970,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Events /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:35:50.725: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Aug 16 20:35:57.637: INFO: &Pod{ObjectMeta:{send-events-41ec5dbd-1cd2-47fb-816b-fee57272f31a events-218 /api/v1/namespaces/events-218/pods/send-events-41ec5dbd-1cd2-47fb-816b-fee57272f31a f2c824f4-6d80-41e4-9345-88fd963faefa 493165 0 2020-08-16 20:35:51 +0000 UTC map[name:foo time:372106153] map[] [] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5785f,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5785f,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5785f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 20:35:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 20:35:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 20:35:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 20:35:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.2.153,StartTime:2020-08-16 20:35:51 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-16 20:35:55 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://9655c2d6cac60f0715dfa38516722c5acec306205e9aecc3006decfb22c5680f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.153,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod Aug 16 20:35:59.662: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Aug 16 20:36:01.670: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:36:01.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-218" for this suite. • [SLOW TEST:10.970 seconds] [k8s.io] [sig-node] Events /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":278,"completed":67,"skipped":1013,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:36:01.697: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: validating cluster-info Aug 16 20:36:01.773: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' Aug 16 20:36:06.458: INFO: stderr: "" Aug 16 20:36:06.458: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:37695\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at 
\x1b[0;33mhttps://172.30.12.66:37695/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:36:06.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7408" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":278,"completed":68,"skipped":1056,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:36:06.472: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-btvv STEP: Creating a pod to test atomic-volume-subpath Aug 16 20:36:06.583: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-btvv" in namespace "subpath-5958" to be "success or failure" Aug 16 20:36:06.598: INFO: Pod "pod-subpath-test-configmap-btvv": Phase="Pending", Reason="", readiness=false. Elapsed: 14.950195ms Aug 16 20:36:08.605: INFO: Pod "pod-subpath-test-configmap-btvv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021813769s Aug 16 20:36:10.612: INFO: Pod "pod-subpath-test-configmap-btvv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028231433s Aug 16 20:36:12.619: INFO: Pod "pod-subpath-test-configmap-btvv": Phase="Running", Reason="", readiness=true. Elapsed: 6.035873883s Aug 16 20:36:14.770: INFO: Pod "pod-subpath-test-configmap-btvv": Phase="Running", Reason="", readiness=true. Elapsed: 8.186319094s Aug 16 20:36:16.777: INFO: Pod "pod-subpath-test-configmap-btvv": Phase="Running", Reason="", readiness=true. Elapsed: 10.193168007s Aug 16 20:36:18.782: INFO: Pod "pod-subpath-test-configmap-btvv": Phase="Running", Reason="", readiness=true. Elapsed: 12.198676765s Aug 16 20:36:20.789: INFO: Pod "pod-subpath-test-configmap-btvv": Phase="Running", Reason="", readiness=true. Elapsed: 14.205799509s Aug 16 20:36:22.880: INFO: Pod "pod-subpath-test-configmap-btvv": Phase="Running", Reason="", readiness=true. 
Elapsed: 16.296293089s Aug 16 20:36:24.964: INFO: Pod "pod-subpath-test-configmap-btvv": Phase="Running", Reason="", readiness=true. Elapsed: 18.380720247s Aug 16 20:36:27.138: INFO: Pod "pod-subpath-test-configmap-btvv": Phase="Running", Reason="", readiness=true. Elapsed: 20.554200385s Aug 16 20:36:29.144: INFO: Pod "pod-subpath-test-configmap-btvv": Phase="Running", Reason="", readiness=true. Elapsed: 22.560109067s Aug 16 20:36:31.185: INFO: Pod "pod-subpath-test-configmap-btvv": Phase="Running", Reason="", readiness=true. Elapsed: 24.601722533s Aug 16 20:36:33.738: INFO: Pod "pod-subpath-test-configmap-btvv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 27.154040921s STEP: Saw pod success Aug 16 20:36:33.738: INFO: Pod "pod-subpath-test-configmap-btvv" satisfied condition "success or failure" Aug 16 20:36:34.420: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-configmap-btvv container test-container-subpath-configmap-btvv: STEP: delete the pod Aug 16 20:36:36.689: INFO: Waiting for pod pod-subpath-test-configmap-btvv to disappear Aug 16 20:36:37.222: INFO: Pod pod-subpath-test-configmap-btvv no longer exists STEP: Deleting pod pod-subpath-test-configmap-btvv Aug 16 20:36:37.222: INFO: Deleting pod "pod-subpath-test-configmap-btvv" in namespace "subpath-5958" [AfterEach] [sig-storage] Subpath /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:36:37.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-5958" for this suite. • [SLOW TEST:31.703 seconds] [sig-storage] Subpath /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":278,"completed":69,"skipped":1056,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:36:38.179: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] 
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Aug 16 20:36:39.368: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-903 /api/v1/namespaces/watch-903/configmaps/e2e-watch-test-configmap-a ebf98f67-1e62-4100-903d-ccf77a9519f8 493322 0 2020-08-16 20:36:39 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Aug 16 20:36:39.369: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-903 /api/v1/namespaces/watch-903/configmaps/e2e-watch-test-configmap-a ebf98f67-1e62-4100-903d-ccf77a9519f8 493322 0 2020-08-16 20:36:39 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification Aug 16 20:36:49.381: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-903 /api/v1/namespaces/watch-903/configmaps/e2e-watch-test-configmap-a ebf98f67-1e62-4100-903d-ccf77a9519f8 493367 0 2020-08-16 20:36:39 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Aug 16 20:36:49.382: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-903 /api/v1/namespaces/watch-903/configmaps/e2e-watch-test-configmap-a ebf98f67-1e62-4100-903d-ccf77a9519f8 493367 0 2020-08-16 20:36:39 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Aug 16 20:36:59.394: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-903 /api/v1/namespaces/watch-903/configmaps/e2e-watch-test-configmap-a ebf98f67-1e62-4100-903d-ccf77a9519f8 493397 0 2020-08-16 20:36:39 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Aug 16 20:36:59.394: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-903 /api/v1/namespaces/watch-903/configmaps/e2e-watch-test-configmap-a ebf98f67-1e62-4100-903d-ccf77a9519f8 493397 0 2020-08-16 20:36:39 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification Aug 16 20:37:09.433: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-903 /api/v1/namespaces/watch-903/configmaps/e2e-watch-test-configmap-a ebf98f67-1e62-4100-903d-ccf77a9519f8 493425 0 2020-08-16 20:36:39 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Aug 16 20:37:09.434: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-903 /api/v1/namespaces/watch-903/configmaps/e2e-watch-test-configmap-a ebf98f67-1e62-4100-903d-ccf77a9519f8 493425 0 2020-08-16 20:36:39 +0000 UTC 
map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Aug 16 20:37:19.452: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-903 /api/v1/namespaces/watch-903/configmaps/e2e-watch-test-configmap-b 4a87f691-5fec-4d1f-b2f1-0a3d6c4a47fd 493454 0 2020-08-16 20:37:19 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Aug 16 20:37:19.453: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-903 /api/v1/namespaces/watch-903/configmaps/e2e-watch-test-configmap-b 4a87f691-5fec-4d1f-b2f1-0a3d6c4a47fd 493454 0 2020-08-16 20:37:19 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification Aug 16 20:37:29.461: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-903 /api/v1/namespaces/watch-903/configmaps/e2e-watch-test-configmap-b 4a87f691-5fec-4d1f-b2f1-0a3d6c4a47fd 493483 0 2020-08-16 20:37:19 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Aug 16 20:37:29.461: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-903 /api/v1/namespaces/watch-903/configmaps/e2e-watch-test-configmap-b 4a87f691-5fec-4d1f-b2f1-0a3d6c4a47fd 493483 0 2020-08-16 20:37:19 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:37:39.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-903" for this suite. 
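The Watchers test opens three label-selected watches (label A, label B, A-or-B) and checks that each ADDED/MODIFIED/DELETED event is delivered only to the watchers whose selector matches. A minimal client-go sketch of one such watch, assuming client-go v0.18 or newer where Watch takes a context (against the 1.17 cluster in this log the older Watch(opts) signature applies); the kubeconfig path matches the log, but the "default" namespace is an illustrative stand-in for the generated watch-903 namespace.

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Watch only ConfigMaps labelled like "configmap A" in the test above.
	w, err := cs.CoreV1().ConfigMaps("default").Watch(context.TODO(), metav1.ListOptions{
		LabelSelector: "watch-this-configmap=multiple-watchers-A",
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	for ev := range w.ResultChan() {
		cm, ok := ev.Object.(*corev1.ConfigMap)
		if !ok {
			continue
		}
		// ev.Type is ADDED, MODIFIED or DELETED, matching the "Got : ..." lines above.
		fmt.Printf("%s %s resourceVersion=%s data=%v\n", ev.Type, cm.Name, cm.ResourceVersion, cm.Data)
	}
}
```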
• [SLOW TEST:61.297 seconds] [sig-api-machinery] Watchers /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":278,"completed":70,"skipped":1098,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:37:39.478: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [BeforeEach] Kubectl run default /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1490 [It] should create an rc or deployment from an image [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Aug 16 20:37:39.799: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-7745' Aug 16 20:37:41.123: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Aug 16 20:37:41.123: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the pod controlled by e2e-test-httpd-deployment gets created [AfterEach] Kubectl run default /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1496 Aug 16 20:37:41.173: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-7745' Aug 16 20:37:42.466: INFO: stderr: "" Aug 16 20:37:42.467: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:37:42.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7745" for this suite. 
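The deprecation warning captured in the stderr above is the interesting part of this check: on 1.17, kubectl run with --generator=deployment/apps.v1 still creates a Deployment, but the generator is on its way out in favor of kubectl create deployment or the run-pod/v1 generator. The object it produces looks roughly like the sketch below, assuming the k8s.io/api/apps/v1 types; the run= label follows the generator's convention as I understand it, and the replica count and container name are illustrative.

```go
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	labels := map[string]string{"run": "e2e-test-httpd-deployment"}
	dep := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-httpd-deployment", Labels: labels},
		Spec: appsv1.DeploymentSpec{
			Replicas: int32Ptr(1),
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{Containers: []corev1.Container{{
					Name:  "e2e-test-httpd-deployment",
					Image: "docker.io/library/httpd:2.4.38-alpine",
				}}},
			},
		},
	}
	fmt.Println(dep.Name)
}
```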
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance]","total":278,"completed":71,"skipped":1107,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:37:42.483: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 16 20:37:45.319: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 16 20:37:47.335: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733207065, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733207065, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733207065, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733207065, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 16 20:37:50.419: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:37:50.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3699" for this suite. 
STEP: Destroying namespace "webhook-3699-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.639 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":278,"completed":72,"skipped":1116,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:37:51.123: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:38:11.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5942" for this suite. • [SLOW TEST:20.529 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. 
[Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":278,"completed":73,"skipped":1123,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:38:11.653: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap configmap-1646/configmap-test-4ba6c610-470d-42fb-9fbf-ca792fe047ff STEP: Creating a pod to test consume configMaps Aug 16 20:38:12.335: INFO: Waiting up to 5m0s for pod "pod-configmaps-27b21147-cf64-4c3e-aa78-77f5bef68660" in namespace "configmap-1646" to be "success or failure" Aug 16 20:38:12.382: INFO: Pod "pod-configmaps-27b21147-cf64-4c3e-aa78-77f5bef68660": Phase="Pending", Reason="", readiness=false. Elapsed: 46.92833ms Aug 16 20:38:14.425: INFO: Pod "pod-configmaps-27b21147-cf64-4c3e-aa78-77f5bef68660": Phase="Pending", Reason="", readiness=false. Elapsed: 2.089648607s Aug 16 20:38:16.510: INFO: Pod "pod-configmaps-27b21147-cf64-4c3e-aa78-77f5bef68660": Phase="Pending", Reason="", readiness=false. Elapsed: 4.174009529s Aug 16 20:38:18.515: INFO: Pod "pod-configmaps-27b21147-cf64-4c3e-aa78-77f5bef68660": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.17914754s STEP: Saw pod success Aug 16 20:38:18.515: INFO: Pod "pod-configmaps-27b21147-cf64-4c3e-aa78-77f5bef68660" satisfied condition "success or failure" Aug 16 20:38:18.518: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-27b21147-cf64-4c3e-aa78-77f5bef68660 container env-test: STEP: delete the pod Aug 16 20:38:18.549: INFO: Waiting for pod pod-configmaps-27b21147-cf64-4c3e-aa78-77f5bef68660 to disappear Aug 16 20:38:18.604: INFO: Pod pod-configmaps-27b21147-cf64-4c3e-aa78-77f5bef68660 no longer exists [AfterEach] [sig-node] ConfigMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:38:18.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1646" for this suite. 
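The environment-consumption test above injects a ConfigMap key into a container environment variable and lets the container print it before exiting, which is why the pod runs to Succeeded. A sketch of the two objects involved, assuming the k8s.io/api types; the ConfigMap name, key, image, and command are illustrative rather than the generated configmap-test-*/pod-configmaps-* names in the log.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test"},
		Data:       map[string]string{"data-1": "value-1"},
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-env"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "env-test",
				Image:   "busybox", // illustrative; the e2e test uses its own test image
				Command: []string{"sh", "-c", "env | grep DATA_1"},
				Env: []corev1.EnvVar{{
					Name: "DATA_1",
					ValueFrom: &corev1.EnvVarSource{
						// Pull the value of key data-1 out of the ConfigMap at pod start.
						ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
							LocalObjectReference: corev1.LocalObjectReference{Name: cm.Name},
							Key:                  "data-1",
						},
					},
				}},
			}},
		},
	}
	fmt.Println(cm.Name, pod.Name)
}
```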
• [SLOW TEST:6.962 seconds] [sig-node] ConfigMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":74,"skipped":1125,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:38:18.616: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-06bada4c-6001-4ee1-906a-74489c22d6b6 STEP: Creating a pod to test consume configMaps Aug 16 20:38:19.274: INFO: Waiting up to 5m0s for pod "pod-configmaps-e1c4fd9f-3f15-4641-81df-23b03c6a1d52" in namespace "configmap-9344" to be "success or failure" Aug 16 20:38:19.279: INFO: Pod "pod-configmaps-e1c4fd9f-3f15-4641-81df-23b03c6a1d52": Phase="Pending", Reason="", readiness=false. Elapsed: 5.251759ms Aug 16 20:38:21.285: INFO: Pod "pod-configmaps-e1c4fd9f-3f15-4641-81df-23b03c6a1d52": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010735099s Aug 16 20:38:23.290: INFO: Pod "pod-configmaps-e1c4fd9f-3f15-4641-81df-23b03c6a1d52": Phase="Running", Reason="", readiness=true. Elapsed: 4.016338715s Aug 16 20:38:27.117: INFO: Pod "pod-configmaps-e1c4fd9f-3f15-4641-81df-23b03c6a1d52": Phase="Succeeded", Reason="", readiness=false. Elapsed: 7.843471056s STEP: Saw pod success Aug 16 20:38:27.118: INFO: Pod "pod-configmaps-e1c4fd9f-3f15-4641-81df-23b03c6a1d52" satisfied condition "success or failure" Aug 16 20:38:27.318: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-e1c4fd9f-3f15-4641-81df-23b03c6a1d52 container configmap-volume-test: STEP: delete the pod Aug 16 20:38:27.472: INFO: Waiting for pod pod-configmaps-e1c4fd9f-3f15-4641-81df-23b03c6a1d52 to disappear Aug 16 20:38:27.482: INFO: Pod pod-configmaps-e1c4fd9f-3f15-4641-81df-23b03c6a1d52 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:38:27.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9344" for this suite. 
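The "with mappings as non-root" variant above mounts selected ConfigMap keys at chosen paths inside the volume and runs the container under a non-root UID, so the projected files still have to be readable to that user. A sketch of such a pod, assuming the k8s.io/api types; the UID, paths, image, and names are illustrative, not copied from the test.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int64Ptr(i int64) *int64 { return &i }
func boolPtr(b bool) *bool    { return &b }

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-nonroot"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser:    int64Ptr(1000), // illustrative non-root UID
				RunAsNonRoot: boolPtr(true),
			},
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume-map"},
						// The "mappings" part: key data-1 is projected to path/to/data-1
						// instead of a file named after the key.
						Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-1"}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "configmap-volume-test",
				Image:        "busybox", // illustrative
				Command:      []string{"sh", "-c", "cat /etc/configmap-volume/path/to/data-1"},
				VolumeMounts: []corev1.VolumeMount{{Name: "configmap-volume", MountPath: "/etc/configmap-volume"}},
			}},
		},
	}
	fmt.Println(pod.Name)
}
```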
• [SLOW TEST:8.877 seconds] [sig-storage] ConfigMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":75,"skipped":1129,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:38:27.494: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs Aug 16 20:38:27.591: INFO: Waiting up to 5m0s for pod "pod-9e5866f9-5c9a-4c67-9bbb-5f60695d53c1" in namespace "emptydir-1786" to be "success or failure" Aug 16 20:38:27.602: INFO: Pod "pod-9e5866f9-5c9a-4c67-9bbb-5f60695d53c1": Phase="Pending", Reason="", readiness=false. Elapsed: 11.015925ms Aug 16 20:38:29.697: INFO: Pod "pod-9e5866f9-5c9a-4c67-9bbb-5f60695d53c1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.105682004s Aug 16 20:38:31.702: INFO: Pod "pod-9e5866f9-5c9a-4c67-9bbb-5f60695d53c1": Phase="Running", Reason="", readiness=true. Elapsed: 4.110524013s Aug 16 20:38:33.706: INFO: Pod "pod-9e5866f9-5c9a-4c67-9bbb-5f60695d53c1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.11452333s STEP: Saw pod success Aug 16 20:38:33.706: INFO: Pod "pod-9e5866f9-5c9a-4c67-9bbb-5f60695d53c1" satisfied condition "success or failure" Aug 16 20:38:33.709: INFO: Trying to get logs from node jerma-worker2 pod pod-9e5866f9-5c9a-4c67-9bbb-5f60695d53c1 container test-container: STEP: delete the pod Aug 16 20:38:33.791: INFO: Waiting for pod pod-9e5866f9-5c9a-4c67-9bbb-5f60695d53c1 to disappear Aug 16 20:38:33.794: INFO: Pod pod-9e5866f9-5c9a-4c67-9bbb-5f60695d53c1 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:38:33.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1786" for this suite. 
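The "(root,0777,tmpfs)" variant above boils down to: a root container, a memory-backed emptyDir, and a file created with mode 0777 on it. A rough equivalent, assuming busybox rather than the e2e test image:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    # Write a file with mode 0777 on the tmpfs mount, then show the mount type and permissions.
    command: ["sh", "-c", "touch /test-volume/file && chmod 0777 /test-volume/file && mount | grep /test-volume && ls -l /test-volume/file"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory        # tmpfs-backed emptyDir
EOF

kubectl logs emptydir-tmpfs-demo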
• [SLOW TEST:6.309 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":76,"skipped":1136,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicaSet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:38:33.806: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Aug 16 20:38:39.145: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:38:39.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-833" for this suite. 
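The adoption/release flow in the spec above can be walked through by hand with kubectl: create a bare pod carrying a label, create a ReplicaSet whose selector matches it (the controller adopts the orphan instead of creating a new pod), then change the pod's label so the ReplicaSet releases it. A sketch, with illustrative names and image:

# Bare pod carrying the label the ReplicaSet will select on.
kubectl run pod-adoption-release --image=httpd --labels=name=pod-adoption-release --restart=Never

# ReplicaSet with a matching selector; it adopts the existing pod rather than creating a new one.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: pod-adoption-release
spec:
  replicas: 1
  selector:
    matchLabels:
      name: pod-adoption-release
  template:
    metadata:
      labels:
        name: pod-adoption-release
    spec:
      containers:
      - name: httpd
        image: httpd
EOF

# Changing the label takes the pod out of the selector; the ReplicaSet releases it
# (drops the ownerReference) and creates a replacement to keep replicas at 1.
kubectl label pod pod-adoption-release name=not-matching --overwrite
kubectl get pod pod-adoption-release -o jsonpath='{.metadata.ownerReferences}'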
• [SLOW TEST:5.455 seconds] [sig-apps] ReplicaSet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":278,"completed":77,"skipped":1179,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:38:39.267: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Aug 16 20:38:39.408: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e1d0eccc-dc90-4c58-8f65-3877f941e106" in namespace "downward-api-6992" to be "success or failure" Aug 16 20:38:39.437: INFO: Pod "downwardapi-volume-e1d0eccc-dc90-4c58-8f65-3877f941e106": Phase="Pending", Reason="", readiness=false. Elapsed: 28.519204ms Aug 16 20:38:41.509: INFO: Pod "downwardapi-volume-e1d0eccc-dc90-4c58-8f65-3877f941e106": Phase="Pending", Reason="", readiness=false. Elapsed: 2.100528867s Aug 16 20:38:43.515: INFO: Pod "downwardapi-volume-e1d0eccc-dc90-4c58-8f65-3877f941e106": Phase="Pending", Reason="", readiness=false. Elapsed: 4.106292719s Aug 16 20:38:45.725: INFO: Pod "downwardapi-volume-e1d0eccc-dc90-4c58-8f65-3877f941e106": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.315980031s STEP: Saw pod success Aug 16 20:38:45.725: INFO: Pod "downwardapi-volume-e1d0eccc-dc90-4c58-8f65-3877f941e106" satisfied condition "success or failure" Aug 16 20:38:45.728: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-e1d0eccc-dc90-4c58-8f65-3877f941e106 container client-container: STEP: delete the pod Aug 16 20:38:45.987: INFO: Waiting for pod downwardapi-volume-e1d0eccc-dc90-4c58-8f65-3877f941e106 to disappear Aug 16 20:38:46.107: INFO: Pod downwardapi-volume-e1d0eccc-dc90-4c58-8f65-3877f941e106 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:38:46.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6992" for this suite. • [SLOW TEST:6.854 seconds] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":78,"skipped":1206,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] PreStop /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:38:46.124: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:172 [It] should call prestop when killing a pod [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating server pod server in namespace prestop-6056 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-6056 STEP: Deleting pre-stop pod Aug 16 20:39:04.380: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." 
], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:39:04.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-6056" for this suite. • [SLOW TEST:18.493 seconds] [k8s.io] [sig-node] PreStop /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should call prestop when killing a pod [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":278,"completed":79,"skipped":1253,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:39:04.618: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-secret-kbx6 STEP: Creating a pod to test atomic-volume-subpath Aug 16 20:39:05.853: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-kbx6" in namespace "subpath-8216" to be "success or failure" Aug 16 20:39:05.862: INFO: Pod "pod-subpath-test-secret-kbx6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.415241ms Aug 16 20:39:08.050: INFO: Pod "pod-subpath-test-secret-kbx6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.19667575s Aug 16 20:39:10.084: INFO: Pod "pod-subpath-test-secret-kbx6": Phase="Running", Reason="", readiness=true. Elapsed: 4.231004562s Aug 16 20:39:12.350: INFO: Pod "pod-subpath-test-secret-kbx6": Phase="Running", Reason="", readiness=true. Elapsed: 6.496227986s Aug 16 20:39:14.355: INFO: Pod "pod-subpath-test-secret-kbx6": Phase="Running", Reason="", readiness=true. Elapsed: 8.501522931s Aug 16 20:39:16.372: INFO: Pod "pod-subpath-test-secret-kbx6": Phase="Running", Reason="", readiness=true. Elapsed: 10.518439596s Aug 16 20:39:18.704: INFO: Pod "pod-subpath-test-secret-kbx6": Phase="Running", Reason="", readiness=true. Elapsed: 12.850239902s Aug 16 20:39:20.710: INFO: Pod "pod-subpath-test-secret-kbx6": Phase="Running", Reason="", readiness=true. 
Elapsed: 14.85690387s Aug 16 20:39:22.717: INFO: Pod "pod-subpath-test-secret-kbx6": Phase="Running", Reason="", readiness=true. Elapsed: 16.863656058s Aug 16 20:39:24.724: INFO: Pod "pod-subpath-test-secret-kbx6": Phase="Running", Reason="", readiness=true. Elapsed: 18.870709466s Aug 16 20:39:26.732: INFO: Pod "pod-subpath-test-secret-kbx6": Phase="Running", Reason="", readiness=true. Elapsed: 20.878197543s Aug 16 20:39:28.762: INFO: Pod "pod-subpath-test-secret-kbx6": Phase="Running", Reason="", readiness=true. Elapsed: 22.908121456s Aug 16 20:39:30.773: INFO: Pod "pod-subpath-test-secret-kbx6": Phase="Running", Reason="", readiness=true. Elapsed: 24.919954827s Aug 16 20:39:32.780: INFO: Pod "pod-subpath-test-secret-kbx6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.926650621s STEP: Saw pod success Aug 16 20:39:32.780: INFO: Pod "pod-subpath-test-secret-kbx6" satisfied condition "success or failure" Aug 16 20:39:32.790: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-secret-kbx6 container test-container-subpath-secret-kbx6: STEP: delete the pod Aug 16 20:39:32.895: INFO: Waiting for pod pod-subpath-test-secret-kbx6 to disappear Aug 16 20:39:32.939: INFO: Pod pod-subpath-test-secret-kbx6 no longer exists STEP: Deleting pod pod-subpath-test-secret-kbx6 Aug 16 20:39:32.939: INFO: Deleting pod "pod-subpath-test-secret-kbx6" in namespace "subpath-8216" [AfterEach] [sig-storage] Subpath /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:39:32.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8216" for this suite. • [SLOW TEST:28.367 seconds] [sig-storage] Subpath /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":278,"completed":80,"skipped":1273,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} S ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:39:32.986: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-2699.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-2699.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2699.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-2699.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-2699.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2699.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 16 20:39:43.724: INFO: DNS probes using dns-2699/dns-test-edb0deb6-08e9-41e5-b572-b9079c5b3202 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:39:44.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2699" for this suite. 
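The probes above use getent hosts, which reads /etc/hosts; the point of the spec is that kubelet writes the pod's own hostname (and, when a subdomain is set, its FQDN under a headless service) into that file. A rough sketch of the setup, with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: dns-test-service
spec:
  clusterIP: None            # headless service backing the pod's FQDN
  selector:
    app: dns-hosts-demo
  ports:
  - port: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: dns-hosts-demo
  labels:
    app: dns-hosts-demo
spec:
  hostname: dns-querier-1
  subdomain: dns-test-service
  containers:
  - name: querier
    image: busybox
    command: ["sleep", "3600"]
EOF

# /etc/hosts should contain a line mapping the pod IP to
# dns-querier-1.dns-test-service.<namespace>.svc.cluster.local and dns-querier-1.
kubectl exec dns-hosts-demo -- cat /etc/hosts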
• [SLOW TEST:13.811 seconds] [sig-network] DNS /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":278,"completed":81,"skipped":1274,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:39:46.799: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu request [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Aug 16 20:39:47.990: INFO: Waiting up to 5m0s for pod "downwardapi-volume-026b5077-3d93-4718-b3c0-15b0206d5a43" in namespace "downward-api-3896" to be "success or failure" Aug 16 20:39:48.240: INFO: Pod "downwardapi-volume-026b5077-3d93-4718-b3c0-15b0206d5a43": Phase="Pending", Reason="", readiness=false. Elapsed: 249.80735ms Aug 16 20:39:50.415: INFO: Pod "downwardapi-volume-026b5077-3d93-4718-b3c0-15b0206d5a43": Phase="Pending", Reason="", readiness=false. Elapsed: 2.424784559s Aug 16 20:39:53.071: INFO: Pod "downwardapi-volume-026b5077-3d93-4718-b3c0-15b0206d5a43": Phase="Pending", Reason="", readiness=false. Elapsed: 5.080151041s Aug 16 20:39:55.076: INFO: Pod "downwardapi-volume-026b5077-3d93-4718-b3c0-15b0206d5a43": Phase="Running", Reason="", readiness=true. Elapsed: 7.08584621s Aug 16 20:39:57.081: INFO: Pod "downwardapi-volume-026b5077-3d93-4718-b3c0-15b0206d5a43": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 9.090478127s STEP: Saw pod success Aug 16 20:39:57.081: INFO: Pod "downwardapi-volume-026b5077-3d93-4718-b3c0-15b0206d5a43" satisfied condition "success or failure" Aug 16 20:39:57.086: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-026b5077-3d93-4718-b3c0-15b0206d5a43 container client-container: STEP: delete the pod Aug 16 20:39:57.145: INFO: Waiting for pod downwardapi-volume-026b5077-3d93-4718-b3c0-15b0206d5a43 to disappear Aug 16 20:39:57.160: INFO: Pod downwardapi-volume-026b5077-3d93-4718-b3c0-15b0206d5a43 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:39:57.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3896" for this suite. • [SLOW TEST:10.372 seconds] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide container's cpu request [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":82,"skipped":1280,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:39:57.171: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name cm-test-opt-del-19a15b13-a5c0-4d6a-8717-69b3207580aa STEP: Creating configMap with name cm-test-opt-upd-b11c0f6b-a0a0-4cae-9f36-a86d208e6746 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-19a15b13-a5c0-4d6a-8717-69b3207580aa STEP: Updating configmap cm-test-opt-upd-b11c0f6b-a0a0-4cae-9f36-a86d208e6746 STEP: Creating configMap with name cm-test-opt-create-173f629e-d4c8-44d6-9e1c-b7214fcd45d7 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:41:29.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4273" for this suite. 
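The spec above mounts ConfigMaps through a projected volume, marks some sources optional, and then deletes, updates and creates ConfigMaps while waiting for the mounted files to catch up. A minimal sketch of an optional projected source and of how an update propagates (names are illustrative; the kubelet sync period determines how quickly the file changes):

kubectl create configmap cm-opt-upd --from-literal=data-1=value-1

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-configmap-demo
spec:
  containers:
  - name: watcher
    image: busybox
    command: ["sh", "-c", "while true; do cat /projected/upd/data-1 2>/dev/null; echo; sleep 5; done"]
    volumeMounts:
    - name: projected-volume
      mountPath: /projected
  volumes:
  - name: projected-volume
    projected:
      sources:
      - configMap:
          name: cm-opt-upd
          items:
          - key: data-1
            path: upd/data-1
      - configMap:
          name: cm-opt-create     # may not exist yet; optional sources don't block pod startup
          optional: true
          items:
          - key: data-1
            path: create/data-1
EOF

# Updating the ConfigMap is eventually reflected in the mounted file.
kubectl create configmap cm-opt-upd --from-literal=data-1=value-2 --dry-run=client -o yaml | kubectl apply -f -
kubectl logs -f projected-configmap-demo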
• [SLOW TEST:92.628 seconds] [sig-storage] Projected configMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":83,"skipped":1284,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:41:29.801: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-1868 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating stateful set ss in namespace statefulset-1868 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-1868 Aug 16 20:41:30.134: INFO: Found 0 stateful pods, waiting for 1 Aug 16 20:41:40.162: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Aug 16 20:41:40.168: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1868 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 16 20:41:42.004: INFO: stderr: "I0816 20:41:41.490595 1572 log.go:172] (0x4000ac20b0) (0x4000839c20) Create stream\nI0816 20:41:41.497569 1572 log.go:172] (0x4000ac20b0) (0x4000839c20) Stream added, broadcasting: 1\nI0816 20:41:41.511393 1572 log.go:172] (0x4000ac20b0) Reply frame received for 1\nI0816 20:41:41.512572 1572 log.go:172] (0x4000ac20b0) (0x4000839cc0) Create stream\nI0816 20:41:41.512662 1572 log.go:172] (0x4000ac20b0) (0x4000839cc0) Stream added, broadcasting: 3\nI0816 20:41:41.514558 1572 log.go:172] (0x4000ac20b0) Reply frame received for 3\nI0816 20:41:41.514796 1572 log.go:172] (0x4000ac20b0) (0x400075a000) Create stream\nI0816 
20:41:41.514851 1572 log.go:172] (0x4000ac20b0) (0x400075a000) Stream added, broadcasting: 5\nI0816 20:41:41.516202 1572 log.go:172] (0x4000ac20b0) Reply frame received for 5\nI0816 20:41:41.571689 1572 log.go:172] (0x4000ac20b0) Data frame received for 5\nI0816 20:41:41.571888 1572 log.go:172] (0x400075a000) (5) Data frame handling\nI0816 20:41:41.572336 1572 log.go:172] (0x400075a000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0816 20:41:41.985464 1572 log.go:172] (0x4000ac20b0) Data frame received for 3\nI0816 20:41:41.985757 1572 log.go:172] (0x4000839cc0) (3) Data frame handling\nI0816 20:41:41.985955 1572 log.go:172] (0x4000839cc0) (3) Data frame sent\nI0816 20:41:41.986148 1572 log.go:172] (0x4000ac20b0) Data frame received for 3\nI0816 20:41:41.986536 1572 log.go:172] (0x4000839cc0) (3) Data frame handling\nI0816 20:41:41.988677 1572 log.go:172] (0x4000ac20b0) Data frame received for 5\nI0816 20:41:41.989134 1572 log.go:172] (0x400075a000) (5) Data frame handling\nI0816 20:41:41.989272 1572 log.go:172] (0x4000ac20b0) Data frame received for 1\nI0816 20:41:41.989351 1572 log.go:172] (0x4000839c20) (1) Data frame handling\nI0816 20:41:41.989442 1572 log.go:172] (0x4000839c20) (1) Data frame sent\nI0816 20:41:41.990364 1572 log.go:172] (0x4000ac20b0) (0x4000839c20) Stream removed, broadcasting: 1\nI0816 20:41:41.992923 1572 log.go:172] (0x4000ac20b0) Go away received\nI0816 20:41:41.995779 1572 log.go:172] (0x4000ac20b0) (0x4000839c20) Stream removed, broadcasting: 1\nI0816 20:41:41.995990 1572 log.go:172] (0x4000ac20b0) (0x4000839cc0) Stream removed, broadcasting: 3\nI0816 20:41:41.996134 1572 log.go:172] (0x4000ac20b0) (0x400075a000) Stream removed, broadcasting: 5\n" Aug 16 20:41:42.005: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 16 20:41:42.005: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Aug 16 20:41:42.093: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Aug 16 20:41:52.152: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Aug 16 20:41:52.152: INFO: Waiting for statefulset status.replicas updated to 0 Aug 16 20:41:52.439: INFO: POD NODE PHASE GRACE CONDITIONS Aug 16 20:41:52.439: INFO: ss-0 jerma-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:41:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:41:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:41:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:41:30 +0000 UTC }] Aug 16 20:41:52.439: INFO: Aug 16 20:41:52.439: INFO: StatefulSet ss has not reached scale 3, at 1 Aug 16 20:41:53.447: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.72249011s Aug 16 20:41:54.475: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.714742204s Aug 16 20:41:55.518: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.686479353s Aug 16 20:41:56.608: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.644241814s Aug 16 20:41:57.835: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.553353953s Aug 16 20:41:59.245: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.326625983s Aug 16 
20:42:00.341: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.916640743s Aug 16 20:42:01.436: INFO: Verifying statefulset ss doesn't scale past 3 for another 820.371661ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-1868 Aug 16 20:42:02.445: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1868 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 16 20:42:03.939: INFO: stderr: "I0816 20:42:03.813194 1596 log.go:172] (0x4000a32000) (0x40007b3ae0) Create stream\nI0816 20:42:03.815565 1596 log.go:172] (0x4000a32000) (0x40007b3ae0) Stream added, broadcasting: 1\nI0816 20:42:03.825060 1596 log.go:172] (0x4000a32000) Reply frame received for 1\nI0816 20:42:03.826396 1596 log.go:172] (0x4000a32000) (0x4000a30000) Create stream\nI0816 20:42:03.826515 1596 log.go:172] (0x4000a32000) (0x4000a30000) Stream added, broadcasting: 3\nI0816 20:42:03.834913 1596 log.go:172] (0x4000a32000) Reply frame received for 3\nI0816 20:42:03.835270 1596 log.go:172] (0x4000a32000) (0x40007b3cc0) Create stream\nI0816 20:42:03.835344 1596 log.go:172] (0x4000a32000) (0x40007b3cc0) Stream added, broadcasting: 5\nI0816 20:42:03.837258 1596 log.go:172] (0x4000a32000) Reply frame received for 5\nI0816 20:42:03.918355 1596 log.go:172] (0x4000a32000) Data frame received for 5\nI0816 20:42:03.918731 1596 log.go:172] (0x4000a32000) Data frame received for 3\nI0816 20:42:03.918906 1596 log.go:172] (0x4000a30000) (3) Data frame handling\nI0816 20:42:03.919053 1596 log.go:172] (0x4000a32000) Data frame received for 1\nI0816 20:42:03.919145 1596 log.go:172] (0x40007b3ae0) (1) Data frame handling\nI0816 20:42:03.919266 1596 log.go:172] (0x40007b3cc0) (5) Data frame handling\nI0816 20:42:03.919770 1596 log.go:172] (0x40007b3ae0) (1) Data frame sent\nI0816 20:42:03.920428 1596 log.go:172] (0x4000a30000) (3) Data frame sent\nI0816 20:42:03.920821 1596 log.go:172] (0x4000a32000) Data frame received for 3\nI0816 20:42:03.920918 1596 log.go:172] (0x4000a30000) (3) Data frame handling\nI0816 20:42:03.921256 1596 log.go:172] (0x40007b3cc0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0816 20:42:03.921345 1596 log.go:172] (0x4000a32000) Data frame received for 5\nI0816 20:42:03.922105 1596 log.go:172] (0x4000a32000) (0x40007b3ae0) Stream removed, broadcasting: 1\nI0816 20:42:03.923043 1596 log.go:172] (0x40007b3cc0) (5) Data frame handling\nI0816 20:42:03.924815 1596 log.go:172] (0x4000a32000) Go away received\nI0816 20:42:03.926843 1596 log.go:172] (0x4000a32000) (0x40007b3ae0) Stream removed, broadcasting: 1\nI0816 20:42:03.927421 1596 log.go:172] (0x4000a32000) (0x4000a30000) Stream removed, broadcasting: 3\nI0816 20:42:03.927808 1596 log.go:172] (0x4000a32000) (0x40007b3cc0) Stream removed, broadcasting: 5\n" Aug 16 20:42:03.940: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Aug 16 20:42:03.940: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Aug 16 20:42:03.941: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1868 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 16 20:42:05.481: INFO: stderr: "I0816 20:42:05.388632 1621 log.go:172] (0x40009e2000) (0x400070fc20) Create stream\nI0816 20:42:05.391930 1621 log.go:172] (0x40009e2000) 
(0x400070fc20) Stream added, broadcasting: 1\nI0816 20:42:05.403708 1621 log.go:172] (0x40009e2000) Reply frame received for 1\nI0816 20:42:05.404323 1621 log.go:172] (0x40009e2000) (0x4000792000) Create stream\nI0816 20:42:05.404391 1621 log.go:172] (0x40009e2000) (0x4000792000) Stream added, broadcasting: 3\nI0816 20:42:05.405841 1621 log.go:172] (0x40009e2000) Reply frame received for 3\nI0816 20:42:05.406076 1621 log.go:172] (0x40009e2000) (0x40006d40a0) Create stream\nI0816 20:42:05.406132 1621 log.go:172] (0x40009e2000) (0x40006d40a0) Stream added, broadcasting: 5\nI0816 20:42:05.407488 1621 log.go:172] (0x40009e2000) Reply frame received for 5\nI0816 20:42:05.460638 1621 log.go:172] (0x40009e2000) Data frame received for 5\nI0816 20:42:05.461088 1621 log.go:172] (0x40009e2000) Data frame received for 3\nI0816 20:42:05.461235 1621 log.go:172] (0x4000792000) (3) Data frame handling\nI0816 20:42:05.461437 1621 log.go:172] (0x40006d40a0) (5) Data frame handling\nI0816 20:42:05.461617 1621 log.go:172] (0x40009e2000) Data frame received for 1\nI0816 20:42:05.461733 1621 log.go:172] (0x400070fc20) (1) Data frame handling\nI0816 20:42:05.462426 1621 log.go:172] (0x40006d40a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0816 20:42:05.462627 1621 log.go:172] (0x400070fc20) (1) Data frame sent\nI0816 20:42:05.462874 1621 log.go:172] (0x4000792000) (3) Data frame sent\nI0816 20:42:05.463123 1621 log.go:172] (0x40009e2000) Data frame received for 5\nI0816 20:42:05.463210 1621 log.go:172] (0x40006d40a0) (5) Data frame handling\nI0816 20:42:05.463365 1621 log.go:172] (0x40009e2000) Data frame received for 3\nI0816 20:42:05.463475 1621 log.go:172] (0x4000792000) (3) Data frame handling\nI0816 20:42:05.466029 1621 log.go:172] (0x40009e2000) (0x400070fc20) Stream removed, broadcasting: 1\nI0816 20:42:05.468119 1621 log.go:172] (0x40009e2000) Go away received\nI0816 20:42:05.471480 1621 log.go:172] (0x40009e2000) (0x400070fc20) Stream removed, broadcasting: 1\nI0816 20:42:05.471789 1621 log.go:172] (0x40009e2000) (0x4000792000) Stream removed, broadcasting: 3\nI0816 20:42:05.471994 1621 log.go:172] (0x40009e2000) (0x40006d40a0) Stream removed, broadcasting: 5\n" Aug 16 20:42:05.482: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Aug 16 20:42:05.482: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Aug 16 20:42:05.483: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1868 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 16 20:42:06.916: INFO: stderr: "I0816 20:42:06.833716 1644 log.go:172] (0x4000a440b0) (0x4000708140) Create stream\nI0816 20:42:06.836228 1644 log.go:172] (0x4000a440b0) (0x4000708140) Stream added, broadcasting: 1\nI0816 20:42:06.845570 1644 log.go:172] (0x4000a440b0) Reply frame received for 1\nI0816 20:42:06.846237 1644 log.go:172] (0x4000a440b0) (0x4000770000) Create stream\nI0816 20:42:06.846300 1644 log.go:172] (0x4000a440b0) (0x4000770000) Stream added, broadcasting: 3\nI0816 20:42:06.847988 1644 log.go:172] (0x4000a440b0) Reply frame received for 3\nI0816 20:42:06.848449 1644 log.go:172] (0x4000a440b0) (0x400077c000) Create stream\nI0816 20:42:06.848544 1644 log.go:172] (0x4000a440b0) (0x400077c000) Stream added, broadcasting: 5\nI0816 20:42:06.850496 1644 log.go:172] 
(0x4000a440b0) Reply frame received for 5\nI0816 20:42:06.901019 1644 log.go:172] (0x4000a440b0) Data frame received for 3\nI0816 20:42:06.901359 1644 log.go:172] (0x4000a440b0) Data frame received for 1\nI0816 20:42:06.901475 1644 log.go:172] (0x4000770000) (3) Data frame handling\nI0816 20:42:06.901692 1644 log.go:172] (0x4000a440b0) Data frame received for 5\nI0816 20:42:06.901817 1644 log.go:172] (0x400077c000) (5) Data frame handling\nI0816 20:42:06.901916 1644 log.go:172] (0x4000708140) (1) Data frame handling\nI0816 20:42:06.902543 1644 log.go:172] (0x400077c000) (5) Data frame sent\nI0816 20:42:06.902619 1644 log.go:172] (0x4000708140) (1) Data frame sent\nI0816 20:42:06.902786 1644 log.go:172] (0x4000770000) (3) Data frame sent\nI0816 20:42:06.903174 1644 log.go:172] (0x4000a440b0) Data frame received for 5\nI0816 20:42:06.903281 1644 log.go:172] (0x400077c000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0816 20:42:06.903335 1644 log.go:172] (0x4000a440b0) Data frame received for 3\nI0816 20:42:06.903393 1644 log.go:172] (0x4000770000) (3) Data frame handling\nI0816 20:42:06.904513 1644 log.go:172] (0x4000a440b0) (0x4000708140) Stream removed, broadcasting: 1\nI0816 20:42:06.906300 1644 log.go:172] (0x4000a440b0) Go away received\nI0816 20:42:06.909012 1644 log.go:172] (0x4000a440b0) (0x4000708140) Stream removed, broadcasting: 1\nI0816 20:42:06.909284 1644 log.go:172] (0x4000a440b0) (0x4000770000) Stream removed, broadcasting: 3\nI0816 20:42:06.909502 1644 log.go:172] (0x4000a440b0) (0x400077c000) Stream removed, broadcasting: 5\n" Aug 16 20:42:06.917: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Aug 16 20:42:06.917: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Aug 16 20:42:06.924: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Aug 16 20:42:06.924: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Aug 16 20:42:06.924: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Aug 16 20:42:06.931: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1868 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 16 20:42:08.722: INFO: stderr: "I0816 20:42:08.618328 1667 log.go:172] (0x4000a40000) (0x400081fc20) Create stream\nI0816 20:42:08.623179 1667 log.go:172] (0x4000a40000) (0x400081fc20) Stream added, broadcasting: 1\nI0816 20:42:08.639119 1667 log.go:172] (0x4000a40000) Reply frame received for 1\nI0816 20:42:08.640263 1667 log.go:172] (0x4000a40000) (0x400099a000) Create stream\nI0816 20:42:08.640386 1667 log.go:172] (0x4000a40000) (0x400099a000) Stream added, broadcasting: 3\nI0816 20:42:08.642220 1667 log.go:172] (0x4000a40000) Reply frame received for 3\nI0816 20:42:08.642615 1667 log.go:172] (0x4000a40000) (0x400081fe00) Create stream\nI0816 20:42:08.642707 1667 log.go:172] (0x4000a40000) (0x400081fe00) Stream added, broadcasting: 5\nI0816 20:42:08.644030 1667 log.go:172] (0x4000a40000) Reply frame received for 5\nI0816 20:42:08.703759 1667 log.go:172] (0x4000a40000) Data frame received for 5\nI0816 20:42:08.704075 1667 log.go:172] (0x400081fe00) (5) Data frame handling\nI0816 20:42:08.704675 1667 
log.go:172] (0x400081fe00) (5) Data frame sent\nI0816 20:42:08.705021 1667 log.go:172] (0x4000a40000) Data frame received for 5\nI0816 20:42:08.705118 1667 log.go:172] (0x400081fe00) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0816 20:42:08.707886 1667 log.go:172] (0x4000a40000) Data frame received for 1\nI0816 20:42:08.707993 1667 log.go:172] (0x400081fc20) (1) Data frame handling\nI0816 20:42:08.708089 1667 log.go:172] (0x400081fc20) (1) Data frame sent\nI0816 20:42:08.708501 1667 log.go:172] (0x4000a40000) Data frame received for 3\nI0816 20:42:08.708711 1667 log.go:172] (0x4000a40000) (0x400081fc20) Stream removed, broadcasting: 1\nI0816 20:42:08.709141 1667 log.go:172] (0x400099a000) (3) Data frame handling\nI0816 20:42:08.709249 1667 log.go:172] (0x400099a000) (3) Data frame sent\nI0816 20:42:08.709340 1667 log.go:172] (0x4000a40000) Data frame received for 3\nI0816 20:42:08.709425 1667 log.go:172] (0x400099a000) (3) Data frame handling\nI0816 20:42:08.709687 1667 log.go:172] (0x4000a40000) Go away received\nI0816 20:42:08.711756 1667 log.go:172] (0x4000a40000) (0x400081fc20) Stream removed, broadcasting: 1\nI0816 20:42:08.712039 1667 log.go:172] (0x4000a40000) (0x400099a000) Stream removed, broadcasting: 3\nI0816 20:42:08.712490 1667 log.go:172] (0x4000a40000) (0x400081fe00) Stream removed, broadcasting: 5\n" Aug 16 20:42:08.722: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 16 20:42:08.722: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Aug 16 20:42:08.723: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1868 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 16 20:42:10.283: INFO: stderr: "I0816 20:42:10.157914 1690 log.go:172] (0x4000a9c0b0) (0x4000ade000) Create stream\nI0816 20:42:10.163740 1690 log.go:172] (0x4000a9c0b0) (0x4000ade000) Stream added, broadcasting: 1\nI0816 20:42:10.172815 1690 log.go:172] (0x4000a9c0b0) Reply frame received for 1\nI0816 20:42:10.173364 1690 log.go:172] (0x4000a9c0b0) (0x4000af8000) Create stream\nI0816 20:42:10.173424 1690 log.go:172] (0x4000a9c0b0) (0x4000af8000) Stream added, broadcasting: 3\nI0816 20:42:10.174784 1690 log.go:172] (0x4000a9c0b0) Reply frame received for 3\nI0816 20:42:10.175019 1690 log.go:172] (0x4000a9c0b0) (0x4000ade0a0) Create stream\nI0816 20:42:10.175076 1690 log.go:172] (0x4000a9c0b0) (0x4000ade0a0) Stream added, broadcasting: 5\nI0816 20:42:10.176123 1690 log.go:172] (0x4000a9c0b0) Reply frame received for 5\nI0816 20:42:10.224373 1690 log.go:172] (0x4000a9c0b0) Data frame received for 5\nI0816 20:42:10.224566 1690 log.go:172] (0x4000ade0a0) (5) Data frame handling\nI0816 20:42:10.225020 1690 log.go:172] (0x4000ade0a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0816 20:42:10.264206 1690 log.go:172] (0x4000a9c0b0) Data frame received for 3\nI0816 20:42:10.264352 1690 log.go:172] (0x4000af8000) (3) Data frame handling\nI0816 20:42:10.264417 1690 log.go:172] (0x4000af8000) (3) Data frame sent\nI0816 20:42:10.264507 1690 log.go:172] (0x4000a9c0b0) Data frame received for 5\nI0816 20:42:10.264623 1690 log.go:172] (0x4000ade0a0) (5) Data frame handling\nI0816 20:42:10.264867 1690 log.go:172] (0x4000a9c0b0) Data frame received for 3\nI0816 20:42:10.264974 1690 log.go:172] (0x4000af8000) (3) Data frame handling\nI0816 20:42:10.266453 1690 log.go:172] 
(0x4000a9c0b0) Data frame received for 1\nI0816 20:42:10.266560 1690 log.go:172] (0x4000ade000) (1) Data frame handling\nI0816 20:42:10.266684 1690 log.go:172] (0x4000ade000) (1) Data frame sent\nI0816 20:42:10.267781 1690 log.go:172] (0x4000a9c0b0) (0x4000ade000) Stream removed, broadcasting: 1\nI0816 20:42:10.270422 1690 log.go:172] (0x4000a9c0b0) Go away received\nI0816 20:42:10.273181 1690 log.go:172] (0x4000a9c0b0) (0x4000ade000) Stream removed, broadcasting: 1\nI0816 20:42:10.273468 1690 log.go:172] (0x4000a9c0b0) (0x4000af8000) Stream removed, broadcasting: 3\nI0816 20:42:10.273658 1690 log.go:172] (0x4000a9c0b0) (0x4000ade0a0) Stream removed, broadcasting: 5\n" Aug 16 20:42:10.284: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 16 20:42:10.284: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Aug 16 20:42:10.285: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1868 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 16 20:42:11.835: INFO: stderr: "I0816 20:42:11.622816 1715 log.go:172] (0x4000125760) (0x400070a000) Create stream\nI0816 20:42:11.626167 1715 log.go:172] (0x4000125760) (0x400070a000) Stream added, broadcasting: 1\nI0816 20:42:11.636110 1715 log.go:172] (0x4000125760) Reply frame received for 1\nI0816 20:42:11.636712 1715 log.go:172] (0x4000125760) (0x4000748000) Create stream\nI0816 20:42:11.636852 1715 log.go:172] (0x4000125760) (0x4000748000) Stream added, broadcasting: 3\nI0816 20:42:11.638380 1715 log.go:172] (0x4000125760) Reply frame received for 3\nI0816 20:42:11.638627 1715 log.go:172] (0x4000125760) (0x40007480a0) Create stream\nI0816 20:42:11.638676 1715 log.go:172] (0x4000125760) (0x40007480a0) Stream added, broadcasting: 5\nI0816 20:42:11.640140 1715 log.go:172] (0x4000125760) Reply frame received for 5\nI0816 20:42:11.690982 1715 log.go:172] (0x4000125760) Data frame received for 5\nI0816 20:42:11.691256 1715 log.go:172] (0x40007480a0) (5) Data frame handling\nI0816 20:42:11.691862 1715 log.go:172] (0x40007480a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0816 20:42:11.808897 1715 log.go:172] (0x4000125760) Data frame received for 3\nI0816 20:42:11.809228 1715 log.go:172] (0x4000748000) (3) Data frame handling\nI0816 20:42:11.809454 1715 log.go:172] (0x4000125760) Data frame received for 5\nI0816 20:42:11.809671 1715 log.go:172] (0x40007480a0) (5) Data frame handling\nI0816 20:42:11.809940 1715 log.go:172] (0x4000748000) (3) Data frame sent\nI0816 20:42:11.810130 1715 log.go:172] (0x4000125760) Data frame received for 3\nI0816 20:42:11.810293 1715 log.go:172] (0x4000748000) (3) Data frame handling\nI0816 20:42:11.810599 1715 log.go:172] (0x4000125760) Data frame received for 1\nI0816 20:42:11.810750 1715 log.go:172] (0x400070a000) (1) Data frame handling\nI0816 20:42:11.810905 1715 log.go:172] (0x400070a000) (1) Data frame sent\nI0816 20:42:11.812567 1715 log.go:172] (0x4000125760) (0x400070a000) Stream removed, broadcasting: 1\nI0816 20:42:11.817156 1715 log.go:172] (0x4000125760) Go away received\nI0816 20:42:11.821115 1715 log.go:172] (0x4000125760) (0x400070a000) Stream removed, broadcasting: 1\nI0816 20:42:11.821727 1715 log.go:172] (0x4000125760) (0x4000748000) Stream removed, broadcasting: 3\nI0816 20:42:11.822292 1715 log.go:172] (0x4000125760) (0x40007480a0) Stream removed, broadcasting: 5\n" Aug 16 
20:42:11.836: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 16 20:42:11.836: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Aug 16 20:42:11.837: INFO: Waiting for statefulset status.replicas updated to 0 Aug 16 20:42:11.843: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Aug 16 20:42:21.854: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Aug 16 20:42:21.854: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Aug 16 20:42:21.855: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Aug 16 20:42:21.891: INFO: POD NODE PHASE GRACE CONDITIONS Aug 16 20:42:21.891: INFO: ss-0 jerma-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:41:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:42:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:42:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:41:30 +0000 UTC }] Aug 16 20:42:21.891: INFO: ss-1 jerma-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:41:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:42:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:42:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:41:52 +0000 UTC }] Aug 16 20:42:21.892: INFO: ss-2 jerma-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:41:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:42:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:42:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:41:52 +0000 UTC }] Aug 16 20:42:21.892: INFO: Aug 16 20:42:21.892: INFO: StatefulSet ss has not reached scale 0, at 3 Aug 16 20:42:23.107: INFO: POD NODE PHASE GRACE CONDITIONS Aug 16 20:42:23.107: INFO: ss-0 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:41:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:42:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:42:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:41:30 +0000 UTC }] Aug 16 20:42:23.108: INFO: ss-1 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:41:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:42:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:42:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:41:52 +0000 UTC }] Aug 16 20:42:23.108: INFO: ss-2 jerma-worker Running 30s 
[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:41:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:42:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:42:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:41:52 +0000 UTC }] Aug 16 20:42:23.109: INFO: Aug 16 20:42:23.109: INFO: StatefulSet ss has not reached scale 0, at 3 Aug 16 20:42:24.137: INFO: POD NODE PHASE GRACE CONDITIONS Aug 16 20:42:24.137: INFO: ss-0 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:41:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:42:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:42:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:41:30 +0000 UTC }] Aug 16 20:42:24.137: INFO: ss-1 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:41:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:42:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:42:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:41:52 +0000 UTC }] Aug 16 20:42:24.138: INFO: ss-2 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:41:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:42:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:42:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:41:52 +0000 UTC }] Aug 16 20:42:24.138: INFO: Aug 16 20:42:24.138: INFO: StatefulSet ss has not reached scale 0, at 3 Aug 16 20:42:25.233: INFO: POD NODE PHASE GRACE CONDITIONS Aug 16 20:42:25.233: INFO: ss-0 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:41:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:42:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:42:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:41:30 +0000 UTC }] Aug 16 20:42:25.233: INFO: ss-1 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:41:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:42:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:42:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:41:52 +0000 UTC }] Aug 16 20:42:25.233: INFO: ss-2 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:41:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:42:12 +0000 UTC ContainersNotReady containers with unready status: 
[webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:42:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:41:52 +0000 UTC }] Aug 16 20:42:25.234: INFO: Aug 16 20:42:25.234: INFO: StatefulSet ss has not reached scale 0, at 3 Aug 16 20:42:26.275: INFO: POD NODE PHASE GRACE CONDITIONS Aug 16 20:42:26.275: INFO: ss-0 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:41:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:42:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:42:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:41:30 +0000 UTC }] Aug 16 20:42:26.275: INFO: ss-1 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:41:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:42:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:42:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:41:52 +0000 UTC }] Aug 16 20:42:26.276: INFO: ss-2 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:41:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:42:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:42:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:41:52 +0000 UTC }] Aug 16 20:42:26.276: INFO: Aug 16 20:42:26.276: INFO: StatefulSet ss has not reached scale 0, at 3 Aug 16 20:42:27.285: INFO: POD NODE PHASE GRACE CONDITIONS Aug 16 20:42:27.285: INFO: ss-0 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:41:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:42:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:42:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:41:30 +0000 UTC }] Aug 16 20:42:27.285: INFO: ss-1 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:41:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:42:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:42:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:41:52 +0000 UTC }] Aug 16 20:42:27.285: INFO: ss-2 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:41:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:42:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:42:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 
2020-08-16 20:41:52 +0000 UTC }] Aug 16 20:42:27.285: INFO: Aug 16 20:42:27.285: INFO: StatefulSet ss has not reached scale 0, at 3 Aug 16 20:42:28.295: INFO: POD NODE PHASE GRACE CONDITIONS Aug 16 20:42:28.295: INFO: ss-0 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:41:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:42:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:42:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:41:30 +0000 UTC }] Aug 16 20:42:28.296: INFO: ss-1 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:41:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:42:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:42:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:41:52 +0000 UTC }] Aug 16 20:42:28.296: INFO: ss-2 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:41:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:42:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:42:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:41:52 +0000 UTC }] Aug 16 20:42:28.296: INFO: Aug 16 20:42:28.296: INFO: StatefulSet ss has not reached scale 0, at 3 Aug 16 20:42:29.675: INFO: POD NODE PHASE GRACE CONDITIONS Aug 16 20:42:29.675: INFO: ss-0 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:41:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:42:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:42:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:41:30 +0000 UTC }] Aug 16 20:42:29.675: INFO: ss-1 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:41:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:42:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:42:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:41:52 +0000 UTC }] Aug 16 20:42:29.675: INFO: ss-2 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:41:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:42:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:42:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:41:52 +0000 UTC }] Aug 16 20:42:29.675: INFO: Aug 16 20:42:29.675: INFO: StatefulSet ss has not reached scale 0, at 3 Aug 16 20:42:30.687: INFO: POD NODE PHASE GRACE CONDITIONS Aug 16 20:42:30.687: 
INFO: ss-0 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:41:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:42:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:42:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:41:30 +0000 UTC }] Aug 16 20:42:30.687: INFO: ss-1 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:41:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:42:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:42:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:41:52 +0000 UTC }] Aug 16 20:42:30.687: INFO: ss-2 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:41:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:42:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:42:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-16 20:41:52 +0000 UTC }] Aug 16 20:42:30.687: INFO: Aug 16 20:42:30.687: INFO: StatefulSet ss has not reached scale 0, at 3 Aug 16 20:42:31.709: INFO: Verifying statefulset ss doesn't scale past 0 for another 174.488171ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-1868 Aug 16 20:42:32.715: INFO: Scaling statefulset ss to 0 Aug 16 20:42:32.736: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Aug 16 20:42:32.739: INFO: Deleting all statefulset in ns statefulset-1868 Aug 16 20:42:32.742: INFO: Scaling statefulset ss to 0 Aug 16 20:42:32.752: INFO: Waiting for statefulset status.replicas updated to 0 Aug 16 20:42:32.755: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:42:32.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-1868" for this suite. 
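Note: the scale-to-zero sequence in the log above can be reproduced outside the e2e framework with plain kubectl. A minimal sketch, assuming the StatefulSet name ss and namespace statefulset-1868 from this run and a working kubeconfig:

kubectl -n statefulset-1868 scale statefulset ss --replicas=0
# Watch status.replicas fall to 0 (the condition the test polls for); Ctrl-C once it does.
kubectl -n statefulset-1868 get statefulset ss -w
# Confirm the ss-0/ss-1/ss-2 pods are gone.
kubectl -n statefulset-1868 get pods

The burst-scaling variant relies on podManagementPolicy: Parallel, which is what lets all replicas terminate at once rather than one ordinal at a time.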
• [SLOW TEST:63.047 seconds] [sig-apps] StatefulSet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":278,"completed":84,"skipped":1312,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:42:32.851: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Aug 16 20:42:41.801: INFO: Successfully updated pod "labelsupdate90dfbeac-f260-47f2-8903-7d18de7f64e7" [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:42:43.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9174" for this suite. 
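Note: the behaviour exercised above — a downwardAPI volume re-rendering after a label change, with no pod restart — can be checked by hand. A minimal sketch with hypothetical names; the busybox image and the tier label are assumptions, the downwardAPI volume fields are standard API:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: labelsupdate-demo
  labels:
    tier: original
spec:
  containers:
  - name: client-container
    image: busybox:1.28
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; echo; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: labels
        fieldRef:
          fieldPath: metadata.labels
EOF
# Update a label in place; within the kubelet sync period the mounted file is rewritten.
kubectl label pod labelsupdate-demo tier=updated --overwrite
kubectl logs labelsupdate-demo --tail=2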
• [SLOW TEST:11.011 seconds] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":85,"skipped":1325,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:42:43.864: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name projected-secret-test-17a6f1ac-d8fd-4a12-9b00-e414abcf6007 STEP: Creating a pod to test consume secrets Aug 16 20:42:44.013: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-2658ea4f-a2ef-4a6f-8d2a-ec7968637c1a" in namespace "projected-7689" to be "success or failure" Aug 16 20:42:44.027: INFO: Pod "pod-projected-secrets-2658ea4f-a2ef-4a6f-8d2a-ec7968637c1a": Phase="Pending", Reason="", readiness=false. Elapsed: 14.228287ms Aug 16 20:42:46.249: INFO: Pod "pod-projected-secrets-2658ea4f-a2ef-4a6f-8d2a-ec7968637c1a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.236150372s Aug 16 20:42:48.279: INFO: Pod "pod-projected-secrets-2658ea4f-a2ef-4a6f-8d2a-ec7968637c1a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.265632768s Aug 16 20:42:50.449: INFO: Pod "pod-projected-secrets-2658ea4f-a2ef-4a6f-8d2a-ec7968637c1a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.436231795s STEP: Saw pod success Aug 16 20:42:50.450: INFO: Pod "pod-projected-secrets-2658ea4f-a2ef-4a6f-8d2a-ec7968637c1a" satisfied condition "success or failure" Aug 16 20:42:51.167: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-2658ea4f-a2ef-4a6f-8d2a-ec7968637c1a container secret-volume-test: STEP: delete the pod Aug 16 20:42:51.964: INFO: Waiting for pod pod-projected-secrets-2658ea4f-a2ef-4a6f-8d2a-ec7968637c1a to disappear Aug 16 20:42:51.973: INFO: Pod pod-projected-secrets-2658ea4f-a2ef-4a6f-8d2a-ec7968637c1a no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:42:51.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7689" for this suite. • [SLOW TEST:8.125 seconds] [sig-storage] Projected secret /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":86,"skipped":1341,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:42:51.991: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-upd-6e94f0bb-d075-4011-9330-d847dc8c9166 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:42:58.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7685" for this suite. 
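Note: the binary-data check above maps to the binaryData field of a ConfigMap, which is delivered byte-for-byte into the mounted file alongside ordinary data keys. A minimal sketch with hypothetical names (the four-byte payload 0xDEADBEEF is just an example):

kubectl create configmap binary-demo --from-literal=text=hello
# binaryData values are base64-encoded in the API object; 3q2+7w== decodes to de ad be ef.
kubectl patch configmap binary-demo --type=merge -p '{"binaryData":{"blob":"3q2+7w=="}}'
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: binary-demo-pod
spec:
  restartPolicy: Never
  containers:
  - name: dump
    image: busybox:1.28
    command: ["sh", "-c", "cat /cm/text; echo; od -An -tx1 /cm/blob"]
    volumeMounts:
    - name: cm
      mountPath: /cm
  volumes:
  - name: cm
    configMap:
      name: binary-demo
EOF
kubectl logs binary-demo-pod   # once the pod has completed: hello, then de ad be ef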
• [SLOW TEST:6.613 seconds] [sig-storage] ConfigMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":87,"skipped":1356,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:42:58.607: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on tmpfs Aug 16 20:42:58.656: INFO: Waiting up to 5m0s for pod "pod-7bc55a27-2730-4c44-980a-1b10a43ecf4a" in namespace "emptydir-730" to be "success or failure" Aug 16 20:42:58.708: INFO: Pod "pod-7bc55a27-2730-4c44-980a-1b10a43ecf4a": Phase="Pending", Reason="", readiness=false. Elapsed: 52.033241ms Aug 16 20:43:00.715: INFO: Pod "pod-7bc55a27-2730-4c44-980a-1b10a43ecf4a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058119573s Aug 16 20:43:02.721: INFO: Pod "pod-7bc55a27-2730-4c44-980a-1b10a43ecf4a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.064884699s STEP: Saw pod success Aug 16 20:43:02.722: INFO: Pod "pod-7bc55a27-2730-4c44-980a-1b10a43ecf4a" satisfied condition "success or failure" Aug 16 20:43:02.725: INFO: Trying to get logs from node jerma-worker pod pod-7bc55a27-2730-4c44-980a-1b10a43ecf4a container test-container: STEP: delete the pod Aug 16 20:43:02.810: INFO: Waiting for pod pod-7bc55a27-2730-4c44-980a-1b10a43ecf4a to disappear Aug 16 20:43:02.819: INFO: Pod pod-7bc55a27-2730-4c44-980a-1b10a43ecf4a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:43:02.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-730" for this suite. 
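Note: the (non-root,0666,tmpfs) case above combines three independent knobs: a pod-level runAsUser, a file created with mode 0666, and an emptyDir backed by memory. A minimal sketch with hypothetical names; UID 1000 and the busybox image are assumptions:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0666-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000              # the "non-root" part
  containers:
  - name: test-container
    image: busybox:1.28
    command: ["sh", "-c", "id && echo content > /mnt/volume/f && chmod 0666 /mnt/volume/f && ls -ln /mnt/volume"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/volume
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory             # the "tmpfs" part
EOF
kubectl logs emptydir-0666-demo  # after completion: uid=1000, then -rw-rw-rw- owned by 1000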
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":88,"skipped":1395,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:43:02.835: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0816 20:43:04.232446 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Aug 16 20:43:04.232: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:43:04.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-450" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":278,"completed":89,"skipped":1453,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Aggregator /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:43:04.252: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Aug 16 20:43:04.713: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the sample API server. Aug 16 20:43:06.527: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Aug 16 20:43:09.471: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733207386, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733207386, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733207386, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733207386, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 16 20:43:11.489: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733207386, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733207386, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733207386, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733207386, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 16 20:43:13.498: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733207386, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733207386, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733207386, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733207386, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 16 20:43:16.139: INFO: Waited 632.726299ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:43:16.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-5219" for this suite. • [SLOW TEST:12.455 seconds] [sig-api-machinery] Aggregator /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]","total":278,"completed":90,"skipped":1453,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:43:16.709: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a 
default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:43:24.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-8319" for this suite. STEP: Destroying namespace "nsdeletetest-427" for this suite. Aug 16 20:43:24.174: INFO: Namespace nsdeletetest-427 was already deleted STEP: Destroying namespace "nsdeletetest-1582" for this suite. • [SLOW TEST:7.470 seconds] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":278,"completed":91,"skipped":1471,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:43:24.179: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod test-webserver-9f860ed8-7e1a-4312-8397-b5941a421031 in namespace container-probe-266 Aug 16 20:43:28.619: INFO: Started pod test-webserver-9f860ed8-7e1a-4312-8397-b5941a421031 in namespace container-probe-266 STEP: checking the pod's current state and verifying that restartCount is present Aug 16 20:43:28.623: INFO: Initial restart count of pod test-webserver-9f860ed8-7e1a-4312-8397-b5941a421031 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:47:28.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-266" for this suite. 
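Note: the probe test above is the "healthy" half of the liveness pair: as long as the HTTP endpoint keeps answering 2xx, restartCount must stay at 0. A minimal sketch with hypothetical names; nginx and the probe path / stand in for the test's own webserver image and /healthz:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-ok-demo
spec:
  containers:
  - name: webserver
    image: nginx:1.17
    livenessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 5
      failureThreshold: 3
EOF
# Check periodically over a few minutes; the value should remain 0.
kubectl get pod liveness-ok-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'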
• [SLOW TEST:244.704 seconds] [k8s.io] Probing container /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":92,"skipped":1476,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} S ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:47:28.885: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test substitution in container's command Aug 16 20:47:29.371: INFO: Waiting up to 5m0s for pod "var-expansion-fb99ca86-577f-42cc-b349-5295f770c6b8" in namespace "var-expansion-2213" to be "success or failure" Aug 16 20:47:29.379: INFO: Pod "var-expansion-fb99ca86-577f-42cc-b349-5295f770c6b8": Phase="Pending", Reason="", readiness=false. Elapsed: 7.943936ms Aug 16 20:47:32.073: INFO: Pod "var-expansion-fb99ca86-577f-42cc-b349-5295f770c6b8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.702194237s Aug 16 20:47:34.108: INFO: Pod "var-expansion-fb99ca86-577f-42cc-b349-5295f770c6b8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.736645507s Aug 16 20:47:36.114: INFO: Pod "var-expansion-fb99ca86-577f-42cc-b349-5295f770c6b8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.742981128s STEP: Saw pod success Aug 16 20:47:36.114: INFO: Pod "var-expansion-fb99ca86-577f-42cc-b349-5295f770c6b8" satisfied condition "success or failure" Aug 16 20:47:36.118: INFO: Trying to get logs from node jerma-worker2 pod var-expansion-fb99ca86-577f-42cc-b349-5295f770c6b8 container dapi-container: STEP: delete the pod Aug 16 20:47:36.175: INFO: Waiting for pod var-expansion-fb99ca86-577f-42cc-b349-5295f770c6b8 to disappear Aug 16 20:47:36.187: INFO: Pod var-expansion-fb99ca86-577f-42cc-b349-5295f770c6b8 no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:47:36.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-2213" for this suite. 
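Note: the substitution exercised above is the kubelet's $(VAR) expansion in command and args, resolved from the container's own env before the process starts. A minimal sketch with hypothetical names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.28
    env:
    - name: MESSAGE
      value: "hello from the env"
    command: ["sh", "-c", "echo test-$(MESSAGE)"]   # $(MESSAGE) is expanded by the kubelet, not the shell
EOF
kubectl logs var-expansion-demo   # after completion: test-hello from the env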
• [SLOW TEST:7.330 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":278,"completed":93,"skipped":1477,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:47:36.216: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on tmpfs Aug 16 20:47:36.273: INFO: Waiting up to 5m0s for pod "pod-3ca9e791-fe3d-489f-aba9-09755c3c5a79" in namespace "emptydir-652" to be "success or failure" Aug 16 20:47:36.283: INFO: Pod "pod-3ca9e791-fe3d-489f-aba9-09755c3c5a79": Phase="Pending", Reason="", readiness=false. Elapsed: 9.685753ms Aug 16 20:47:38.289: INFO: Pod "pod-3ca9e791-fe3d-489f-aba9-09755c3c5a79": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016041378s Aug 16 20:47:40.308: INFO: Pod "pod-3ca9e791-fe3d-489f-aba9-09755c3c5a79": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035120823s STEP: Saw pod success Aug 16 20:47:40.310: INFO: Pod "pod-3ca9e791-fe3d-489f-aba9-09755c3c5a79" satisfied condition "success or failure" Aug 16 20:47:40.314: INFO: Trying to get logs from node jerma-worker2 pod pod-3ca9e791-fe3d-489f-aba9-09755c3c5a79 container test-container: STEP: delete the pod Aug 16 20:47:40.400: INFO: Waiting for pod pod-3ca9e791-fe3d-489f-aba9-09755c3c5a79 to disappear Aug 16 20:47:40.413: INFO: Pod pod-3ca9e791-fe3d-489f-aba9-09755c3c5a79 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:47:40.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-652" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":94,"skipped":1485,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:47:40.462: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-7583de71-7889-4f36-bab5-9194b5efac11 STEP: Creating a pod to test consume configMaps Aug 16 20:47:40.859: INFO: Waiting up to 5m0s for pod "pod-configmaps-0dd43d69-49e7-4158-9130-efa871a05e01" in namespace "configmap-4333" to be "success or failure" Aug 16 20:47:41.066: INFO: Pod "pod-configmaps-0dd43d69-49e7-4158-9130-efa871a05e01": Phase="Pending", Reason="", readiness=false. Elapsed: 206.853774ms Aug 16 20:47:43.252: INFO: Pod "pod-configmaps-0dd43d69-49e7-4158-9130-efa871a05e01": Phase="Pending", Reason="", readiness=false. Elapsed: 2.392487062s Aug 16 20:47:45.274: INFO: Pod "pod-configmaps-0dd43d69-49e7-4158-9130-efa871a05e01": Phase="Pending", Reason="", readiness=false. Elapsed: 4.414887706s Aug 16 20:47:47.305: INFO: Pod "pod-configmaps-0dd43d69-49e7-4158-9130-efa871a05e01": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.445754123s STEP: Saw pod success Aug 16 20:47:47.305: INFO: Pod "pod-configmaps-0dd43d69-49e7-4158-9130-efa871a05e01" satisfied condition "success or failure" Aug 16 20:47:47.309: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-0dd43d69-49e7-4158-9130-efa871a05e01 container configmap-volume-test: STEP: delete the pod Aug 16 20:47:47.343: INFO: Waiting for pod pod-configmaps-0dd43d69-49e7-4158-9130-efa871a05e01 to disappear Aug 16 20:47:47.347: INFO: Pod pod-configmaps-0dd43d69-49e7-4158-9130-efa871a05e01 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:47:47.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4333" for this suite. 
• [SLOW TEST:6.894 seconds] [sig-storage] ConfigMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":95,"skipped":1494,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:47:47.357: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Probing container /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:48:47.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9949" for this suite. 
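Note: the contrast with the liveness case is the point of this test: a failing readiness probe keeps the pod out of Ready (and out of Service endpoints) but never triggers a container restart. A minimal sketch with hypothetical names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: readiness-fail-demo
spec:
  containers:
  - name: busybox
    image: busybox:1.28
    command: ["sleep", "3600"]
    readinessProbe:
      exec:
        command: ["/bin/false"]   # always fails, so the container never reports Ready
      periodSeconds: 5
EOF
# ready stays "false" and restartCount stays 0 for as long as the pod runs:
kubectl get pod readiness-fail-demo -o jsonpath='{.status.containerStatuses[0].ready}{" "}{.status.containerStatuses[0].restartCount}{"\n"}'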
• [SLOW TEST:60.150 seconds] [k8s.io] Probing container /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":278,"completed":96,"skipped":1496,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:48:47.510: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-7170 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-7170 STEP: creating replication controller externalsvc in namespace services-7170 I0816 20:48:47.730400 7 runners.go:189] Created replication controller with name: externalsvc, namespace: services-7170, replica count: 2 I0816 20:48:50.781837 7 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0816 20:48:53.782566 7 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Aug 16 20:48:53.829: INFO: Creating new exec pod Aug 16 20:48:57.875: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7170 execpodn9626 -- /bin/sh -x -c nslookup clusterip-service' Aug 16 20:49:03.496: INFO: stderr: "I0816 20:49:03.378525 1739 log.go:172] (0x40007526e0) (0x400081d9a0) Create stream\nI0816 20:49:03.381113 1739 log.go:172] (0x40007526e0) (0x400081d9a0) Stream added, broadcasting: 1\nI0816 20:49:03.395213 1739 log.go:172] (0x40007526e0) Reply frame received for 1\nI0816 20:49:03.397271 1739 log.go:172] (0x40007526e0) (0x400081dc20) Create stream\nI0816 20:49:03.397412 1739 log.go:172] (0x40007526e0) (0x400081dc20) Stream added, broadcasting: 3\nI0816 
20:49:03.399330 1739 log.go:172] (0x40007526e0) Reply frame received for 3\nI0816 20:49:03.399644 1739 log.go:172] (0x40007526e0) (0x400081dcc0) Create stream\nI0816 20:49:03.399719 1739 log.go:172] (0x40007526e0) (0x400081dcc0) Stream added, broadcasting: 5\nI0816 20:49:03.401635 1739 log.go:172] (0x40007526e0) Reply frame received for 5\nI0816 20:49:03.462475 1739 log.go:172] (0x40007526e0) Data frame received for 5\nI0816 20:49:03.463050 1739 log.go:172] (0x400081dcc0) (5) Data frame handling\n+ nslookup clusterip-service\nI0816 20:49:03.465249 1739 log.go:172] (0x400081dcc0) (5) Data frame sent\nI0816 20:49:03.473740 1739 log.go:172] (0x40007526e0) Data frame received for 3\nI0816 20:49:03.473829 1739 log.go:172] (0x400081dc20) (3) Data frame handling\nI0816 20:49:03.473928 1739 log.go:172] (0x400081dc20) (3) Data frame sent\nI0816 20:49:03.474505 1739 log.go:172] (0x40007526e0) Data frame received for 3\nI0816 20:49:03.474617 1739 log.go:172] (0x400081dc20) (3) Data frame handling\nI0816 20:49:03.474745 1739 log.go:172] (0x400081dc20) (3) Data frame sent\nI0816 20:49:03.475079 1739 log.go:172] (0x40007526e0) Data frame received for 5\nI0816 20:49:03.475246 1739 log.go:172] (0x40007526e0) Data frame received for 3\nI0816 20:49:03.475393 1739 log.go:172] (0x400081dc20) (3) Data frame handling\nI0816 20:49:03.475531 1739 log.go:172] (0x400081dcc0) (5) Data frame handling\nI0816 20:49:03.477259 1739 log.go:172] (0x40007526e0) Data frame received for 1\nI0816 20:49:03.477388 1739 log.go:172] (0x400081d9a0) (1) Data frame handling\nI0816 20:49:03.477562 1739 log.go:172] (0x400081d9a0) (1) Data frame sent\nI0816 20:49:03.478196 1739 log.go:172] (0x40007526e0) (0x400081d9a0) Stream removed, broadcasting: 1\nI0816 20:49:03.481130 1739 log.go:172] (0x40007526e0) Go away received\nI0816 20:49:03.483758 1739 log.go:172] (0x40007526e0) (0x400081d9a0) Stream removed, broadcasting: 1\nI0816 20:49:03.484257 1739 log.go:172] (0x40007526e0) (0x400081dc20) Stream removed, broadcasting: 3\nI0816 20:49:03.484441 1739 log.go:172] (0x40007526e0) (0x400081dcc0) Stream removed, broadcasting: 5\n" Aug 16 20:49:03.497: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-7170.svc.cluster.local\tcanonical name = externalsvc.services-7170.svc.cluster.local.\nName:\texternalsvc.services-7170.svc.cluster.local\nAddress: 10.100.78.7\n\n" STEP: deleting ReplicationController externalsvc in namespace services-7170, will wait for the garbage collector to delete the pods Aug 16 20:49:03.562: INFO: Deleting ReplicationController externalsvc took: 8.340342ms Aug 16 20:49:03.863: INFO: Terminating ReplicationController externalsvc pods took: 300.634353ms Aug 16 20:49:12.408: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:49:12.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7170" for this suite. 
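Note: the nslookup output above is the observable effect of the type change: once a Service is type ExternalName, cluster DNS answers with a CNAME to spec.externalName instead of a ClusterIP A record. The test mutates the existing clusterip-service in place (clearing clusterIP and ports); the sketch below only shows the resulting DNS behaviour with a freshly created ExternalName Service, hypothetical names throughout and assuming the cluster can resolve external names:

kubectl create service externalname extname-demo --external-name=example.com
kubectl run dns-check --rm -it --restart=Never --image=busybox:1.28 -- nslookup extname-demo
# Expected: extname-demo.<namespace>.svc.cluster.local reported as a canonical name for example.com.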
[AfterEach] [sig-network] Services /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:25.315 seconds] [sig-network] Services /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":278,"completed":97,"skipped":1521,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:49:12.826: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-d30d93f6-80dc-47ad-9f42-ac6d8af475d3 STEP: Creating a pod to test consume configMaps Aug 16 20:49:13.607: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-cada43eb-fd11-4b4f-99ef-18021682ea90" in namespace "projected-6495" to be "success or failure" Aug 16 20:49:13.643: INFO: Pod "pod-projected-configmaps-cada43eb-fd11-4b4f-99ef-18021682ea90": Phase="Pending", Reason="", readiness=false. Elapsed: 35.347233ms Aug 16 20:49:15.648: INFO: Pod "pod-projected-configmaps-cada43eb-fd11-4b4f-99ef-18021682ea90": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040988803s Aug 16 20:49:17.677: INFO: Pod "pod-projected-configmaps-cada43eb-fd11-4b4f-99ef-18021682ea90": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.069718133s STEP: Saw pod success Aug 16 20:49:17.677: INFO: Pod "pod-projected-configmaps-cada43eb-fd11-4b4f-99ef-18021682ea90" satisfied condition "success or failure" Aug 16 20:49:17.681: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-cada43eb-fd11-4b4f-99ef-18021682ea90 container projected-configmap-volume-test: STEP: delete the pod Aug 16 20:49:17.726: INFO: Waiting for pod pod-projected-configmaps-cada43eb-fd11-4b4f-99ef-18021682ea90 to disappear Aug 16 20:49:17.740: INFO: Pod pod-projected-configmaps-cada43eb-fd11-4b4f-99ef-18021682ea90 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:49:17.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6495" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":98,"skipped":1526,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:49:17.752: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-a1691a95-6ee0-47fd-9f19-02307d1a732d STEP: Creating a pod to test consume secrets Aug 16 20:49:18.242: INFO: Waiting up to 5m0s for pod "pod-secrets-1246e10c-b0a9-4306-b751-15e9e7478e17" in namespace "secrets-1418" to be "success or failure" Aug 16 20:49:18.308: INFO: Pod "pod-secrets-1246e10c-b0a9-4306-b751-15e9e7478e17": Phase="Pending", Reason="", readiness=false. Elapsed: 65.741364ms Aug 16 20:49:20.314: INFO: Pod "pod-secrets-1246e10c-b0a9-4306-b751-15e9e7478e17": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071761639s Aug 16 20:49:23.987: INFO: Pod "pod-secrets-1246e10c-b0a9-4306-b751-15e9e7478e17": Phase="Running", Reason="", readiness=true. Elapsed: 5.744692422s Aug 16 20:49:25.994: INFO: Pod "pod-secrets-1246e10c-b0a9-4306-b751-15e9e7478e17": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 7.751488156s STEP: Saw pod success Aug 16 20:49:25.994: INFO: Pod "pod-secrets-1246e10c-b0a9-4306-b751-15e9e7478e17" satisfied condition "success or failure" Aug 16 20:49:25.998: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-1246e10c-b0a9-4306-b751-15e9e7478e17 container secret-env-test: STEP: delete the pod Aug 16 20:49:26.073: INFO: Waiting for pod pod-secrets-1246e10c-b0a9-4306-b751-15e9e7478e17 to disappear Aug 16 20:49:26.082: INFO: Pod pod-secrets-1246e10c-b0a9-4306-b751-15e9e7478e17 no longer exists [AfterEach] [sig-api-machinery] Secrets /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:49:26.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1418" for this suite. • [SLOW TEST:8.342 seconds] [sig-api-machinery] Secrets /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":278,"completed":99,"skipped":1526,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:49:26.095: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 
'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:50:01.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2391" for this suite. • [SLOW TEST:35.236 seconds] [k8s.io] Container Runtime /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 blackbox test /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 when starting a container that exits /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 should run with the expected status [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":278,"completed":100,"skipped":1526,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:50:01.332: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide podname only [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Aug 16 20:50:01.482: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0fddc7f3-df7f-4f4a-8aa6-7214a10fe70a" in namespace "projected-736" to be "success or failure" Aug 16 20:50:01.491: INFO: Pod "downwardapi-volume-0fddc7f3-df7f-4f4a-8aa6-7214a10fe70a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.471166ms Aug 16 20:50:03.497: INFO: Pod "downwardapi-volume-0fddc7f3-df7f-4f4a-8aa6-7214a10fe70a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014572746s Aug 16 20:50:05.504: INFO: Pod "downwardapi-volume-0fddc7f3-df7f-4f4a-8aa6-7214a10fe70a": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.022023281s Aug 16 20:50:07.511: INFO: Pod "downwardapi-volume-0fddc7f3-df7f-4f4a-8aa6-7214a10fe70a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.029337032s STEP: Saw pod success Aug 16 20:50:07.512: INFO: Pod "downwardapi-volume-0fddc7f3-df7f-4f4a-8aa6-7214a10fe70a" satisfied condition "success or failure" Aug 16 20:50:07.517: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-0fddc7f3-df7f-4f4a-8aa6-7214a10fe70a container client-container: STEP: delete the pod Aug 16 20:50:07.601: INFO: Waiting for pod downwardapi-volume-0fddc7f3-df7f-4f4a-8aa6-7214a10fe70a to disappear Aug 16 20:50:07.609: INFO: Pod downwardapi-volume-0fddc7f3-df7f-4f4a-8aa6-7214a10fe70a no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:50:07.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-736" for this suite. • [SLOW TEST:6.288 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide podname only [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":278,"completed":101,"skipped":1529,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:50:07.624: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-upd-4db490ce-e0f4-4aec-bd26-324ee975a239 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-4db490ce-e0f4-4aec-bd26-324ee975a239 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:50:13.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5951" for this suite. 
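------------------------------
The ConfigMap test above mounts a ConfigMap as a volume, updates the ConfigMap object, and then polls the mounted file until the kubelet has re-synced the new value into the running container. The following is only a minimal sketch of the two objects involved, not the e2e code itself: the names, image, command and mount path are invented, and it assumes the k8s.io/api and k8s.io/apimachinery Go modules are available; it prints manifests that could be piped to kubectl apply -f -.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// ConfigMap whose single key will later be updated in place.
	cm := corev1.ConfigMap{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "ConfigMap"},
		ObjectMeta: metav1.ObjectMeta{Name: "example-configmap-upd"},
		Data:       map[string]string{"data-1": "value-1"},
	}

	// Pod that mounts the ConfigMap as a volume; the kubelet re-syncs the
	// mounted files after the ConfigMap object changes, so the update becomes
	// visible inside the running container without a restart.
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "example-configmap-upd-pod"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "watcher",
				Image:   "busybox",
				Command: []string{"sh", "-c", "while true; do cat /etc/cm/data-1; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "cm-volume",
					MountPath: "/etc/cm",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "cm-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: cm.Name},
					},
				},
			}},
		},
	}

	for _, obj := range []interface{}{cm, pod} {
		out, _ := json.MarshalIndent(obj, "", "  ")
		fmt.Println(string(out))
	}
}
------------------------------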
• [SLOW TEST:6.268 seconds] [sig-storage] ConfigMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":102,"skipped":1551,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} S ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:50:13.892: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-5f103b55-2743-438d-80e6-e3125b42000d STEP: Creating a pod to test consume configMaps Aug 16 20:50:14.109: INFO: Waiting up to 5m0s for pod "pod-configmaps-84b5ec12-aee3-445a-92d7-a1bfa0e53b56" in namespace "configmap-2263" to be "success or failure" Aug 16 20:50:14.181: INFO: Pod "pod-configmaps-84b5ec12-aee3-445a-92d7-a1bfa0e53b56": Phase="Pending", Reason="", readiness=false. Elapsed: 71.905978ms Aug 16 20:50:16.306: INFO: Pod "pod-configmaps-84b5ec12-aee3-445a-92d7-a1bfa0e53b56": Phase="Pending", Reason="", readiness=false. Elapsed: 2.19715233s Aug 16 20:50:18.370: INFO: Pod "pod-configmaps-84b5ec12-aee3-445a-92d7-a1bfa0e53b56": Phase="Running", Reason="", readiness=true. Elapsed: 4.260550621s Aug 16 20:50:20.377: INFO: Pod "pod-configmaps-84b5ec12-aee3-445a-92d7-a1bfa0e53b56": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.267661393s STEP: Saw pod success Aug 16 20:50:20.377: INFO: Pod "pod-configmaps-84b5ec12-aee3-445a-92d7-a1bfa0e53b56" satisfied condition "success or failure" Aug 16 20:50:20.382: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-84b5ec12-aee3-445a-92d7-a1bfa0e53b56 container configmap-volume-test: STEP: delete the pod Aug 16 20:50:20.424: INFO: Waiting for pod pod-configmaps-84b5ec12-aee3-445a-92d7-a1bfa0e53b56 to disappear Aug 16 20:50:20.431: INFO: Pod pod-configmaps-84b5ec12-aee3-445a-92d7-a1bfa0e53b56 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:50:20.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2263" for this suite. 
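------------------------------
"Mappings" in the test name above refers to ConfigMapVolumeSource.Items: individual ConfigMap keys are projected to chosen file paths (optionally with a per-file mode) instead of one file per key. A minimal sketch of such a volume, with invented names and an assumed 0400 mode, using the k8s.io/api types:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Items remaps a single ConfigMap key to a custom relative path inside
	// the mount, with its own file mode.
	mode := int32(0400)
	vol := corev1.Volume{
		Name: "configmap-volume",
		VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume-map"},
				Items: []corev1.KeyToPath{{
					Key:  "data-1",
					Path: "path/to/data-2", // file appears at <mountPath>/path/to/data-2
					Mode: &mode,
				}},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}
------------------------------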
• [SLOW TEST:6.551 seconds] [sig-storage] ConfigMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":103,"skipped":1552,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:50:20.444: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide podname only [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Aug 16 20:50:20.570: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1e255b4d-ae0f-4b6d-92ef-20d12a69a5c2" in namespace "downward-api-8603" to be "success or failure" Aug 16 20:50:20.575: INFO: Pod "downwardapi-volume-1e255b4d-ae0f-4b6d-92ef-20d12a69a5c2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.630521ms Aug 16 20:50:22.636: INFO: Pod "downwardapi-volume-1e255b4d-ae0f-4b6d-92ef-20d12a69a5c2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065640752s Aug 16 20:50:24.643: INFO: Pod "downwardapi-volume-1e255b4d-ae0f-4b6d-92ef-20d12a69a5c2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.072604965s Aug 16 20:50:26.649: INFO: Pod "downwardapi-volume-1e255b4d-ae0f-4b6d-92ef-20d12a69a5c2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.078723053s STEP: Saw pod success Aug 16 20:50:26.649: INFO: Pod "downwardapi-volume-1e255b4d-ae0f-4b6d-92ef-20d12a69a5c2" satisfied condition "success or failure" Aug 16 20:50:26.654: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-1e255b4d-ae0f-4b6d-92ef-20d12a69a5c2 container client-container: STEP: delete the pod Aug 16 20:50:26.702: INFO: Waiting for pod downwardapi-volume-1e255b4d-ae0f-4b6d-92ef-20d12a69a5c2 to disappear Aug 16 20:50:26.712: INFO: Pod downwardapi-volume-1e255b4d-ae0f-4b6d-92ef-20d12a69a5c2 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:50:26.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8603" for this suite. • [SLOW TEST:6.280 seconds] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide podname only [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":278,"completed":104,"skipped":1558,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:50:26.726: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-79801694-53a9-4cbb-abe3-aa968f9c9784 STEP: Creating a pod to test consume configMaps Aug 16 20:50:26.836: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-53b6b431-f4f1-41e4-b925-bb8c46c6913f" in namespace "projected-3715" to be "success or failure" Aug 16 20:50:26.867: INFO: Pod "pod-projected-configmaps-53b6b431-f4f1-41e4-b925-bb8c46c6913f": Phase="Pending", Reason="", readiness=false. Elapsed: 30.772205ms Aug 16 20:50:29.007: INFO: Pod "pod-projected-configmaps-53b6b431-f4f1-41e4-b925-bb8c46c6913f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.171164158s Aug 16 20:50:31.015: INFO: Pod "pod-projected-configmaps-53b6b431-f4f1-41e4-b925-bb8c46c6913f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.17910775s Aug 16 20:50:33.021: INFO: Pod "pod-projected-configmaps-53b6b431-f4f1-41e4-b925-bb8c46c6913f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.185268939s STEP: Saw pod success Aug 16 20:50:33.021: INFO: Pod "pod-projected-configmaps-53b6b431-f4f1-41e4-b925-bb8c46c6913f" satisfied condition "success or failure" Aug 16 20:50:33.026: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-53b6b431-f4f1-41e4-b925-bb8c46c6913f container projected-configmap-volume-test: STEP: delete the pod Aug 16 20:50:33.059: INFO: Waiting for pod pod-projected-configmaps-53b6b431-f4f1-41e4-b925-bb8c46c6913f to disappear Aug 16 20:50:33.070: INFO: Pod pod-projected-configmaps-53b6b431-f4f1-41e4-b925-bb8c46c6913f no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:50:33.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3715" for this suite. • [SLOW TEST:6.353 seconds] [sig-storage] Projected configMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":105,"skipped":1570,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:50:33.085: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0816 20:51:03.970251 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
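------------------------------
The garbage-collector test above deletes a Deployment with deleteOptions.PropagationPolicy=Orphan and then waits 30 seconds to confirm the ReplicaSet is left behind. A minimal sketch of that delete call, assuming a recent client-go where the typed clients take a context (the v1.17-era client used by this suite omits it); the kubeconfig path, namespace and Deployment name are placeholders:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Deleting the Deployment with the Orphan propagation policy tells the
	// garbage collector to strip the ownerReferences from its ReplicaSet
	// instead of cascading the delete, so the ReplicaSet (and its pods)
	// stay behind -- exactly what the test asserts during its 30s wait.
	orphan := metav1.DeletePropagationOrphan
	err = cs.AppsV1().Deployments("default").Delete(
		context.TODO(),
		"example-deployment",
		metav1.DeleteOptions{PropagationPolicy: &orphan},
	)
	if err != nil {
		panic(err)
	}
	fmt.Println("deployment deleted; its ReplicaSet was orphaned")
}
------------------------------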
Aug 16 20:51:03.970: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:51:03.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-302" for this suite. • [SLOW TEST:30.895 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":278,"completed":106,"skipped":1637,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:51:03.981: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should check if v1 is in available api versions [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: validating api versions Aug 16 20:51:04.275: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' Aug 16 20:51:05.575: INFO: stderr: "" Aug 16 20:51:05.575: INFO: stdout: 
"admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:51:05.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3675" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":278,"completed":107,"skipped":1637,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:51:05.604: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Aug 16 20:51:14.481: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Aug 16 20:51:14.485: INFO: Pod pod-with-prestop-http-hook still exists Aug 16 20:51:16.485: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Aug 16 20:51:16.490: INFO: Pod pod-with-prestop-http-hook still exists Aug 16 20:51:18.485: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Aug 16 20:51:18.493: INFO: Pod pod-with-prestop-http-hook still exists Aug 16 20:51:20.485: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Aug 16 20:51:20.589: INFO: Pod pod-with-prestop-http-hook still exists Aug 16 20:51:22.485: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Aug 16 20:51:22.595: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:51:22.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-8975" for this suite. • [SLOW TEST:17.013 seconds] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":278,"completed":108,"skipped":1645,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:51:22.618: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] 
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Aug 16 20:51:23.240: INFO: Waiting up to 5m0s for pod "downwardapi-volume-704385f8-d28c-4a37-b6b1-9aef4bdbfae9" in namespace "downward-api-854" to be "success or failure" Aug 16 20:51:23.265: INFO: Pod "downwardapi-volume-704385f8-d28c-4a37-b6b1-9aef4bdbfae9": Phase="Pending", Reason="", readiness=false. Elapsed: 24.673084ms Aug 16 20:51:25.273: INFO: Pod "downwardapi-volume-704385f8-d28c-4a37-b6b1-9aef4bdbfae9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032510007s Aug 16 20:51:27.279: INFO: Pod "downwardapi-volume-704385f8-d28c-4a37-b6b1-9aef4bdbfae9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038428756s Aug 16 20:51:29.654: INFO: Pod "downwardapi-volume-704385f8-d28c-4a37-b6b1-9aef4bdbfae9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.414227359s STEP: Saw pod success Aug 16 20:51:29.655: INFO: Pod "downwardapi-volume-704385f8-d28c-4a37-b6b1-9aef4bdbfae9" satisfied condition "success or failure" Aug 16 20:51:29.877: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-704385f8-d28c-4a37-b6b1-9aef4bdbfae9 container client-container: STEP: delete the pod Aug 16 20:51:29.924: INFO: Waiting for pod downwardapi-volume-704385f8-d28c-4a37-b6b1-9aef4bdbfae9 to disappear Aug 16 20:51:29.995: INFO: Pod downwardapi-volume-704385f8-d28c-4a37-b6b1-9aef4bdbfae9 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:51:29.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-854" for this suite. 
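------------------------------
The Downward API test above asks the volume plugin for limits.memory on a container that sets no memory limit; in that case the file is populated with the node's allocatable memory, which is what the test verifies. A minimal sketch of such a pod, with invented names and paths, assuming the k8s.io/api and k8s.io/apimachinery modules:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// The container sets no memory limit on purpose, yet the downward API
	// volume still requests "limits.memory"; the value written to
	// /etc/podinfo/memory_limit then falls back to node allocatable memory.
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/podinfo/memory_limit"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "podinfo",
					MountPath: "/etc/podinfo",
				}},
				// Note: no Resources.Limits set.
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "memory_limit",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.memory",
							},
						}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
------------------------------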
• [SLOW TEST:7.388 seconds] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":109,"skipped":1652,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:51:30.008: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop complex daemon [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Aug 16 20:51:30.134: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Aug 16 20:51:30.169: INFO: Number of nodes with available pods: 0 Aug 16 20:51:30.169: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
Aug 16 20:51:30.324: INFO: Number of nodes with available pods: 0 Aug 16 20:51:30.324: INFO: Node jerma-worker2 is running more than one daemon pod Aug 16 20:51:31.337: INFO: Number of nodes with available pods: 0 Aug 16 20:51:31.337: INFO: Node jerma-worker2 is running more than one daemon pod Aug 16 20:51:32.331: INFO: Number of nodes with available pods: 0 Aug 16 20:51:32.332: INFO: Node jerma-worker2 is running more than one daemon pod Aug 16 20:51:33.427: INFO: Number of nodes with available pods: 0 Aug 16 20:51:33.427: INFO: Node jerma-worker2 is running more than one daemon pod Aug 16 20:51:34.330: INFO: Number of nodes with available pods: 0 Aug 16 20:51:34.331: INFO: Node jerma-worker2 is running more than one daemon pod Aug 16 20:51:35.329: INFO: Number of nodes with available pods: 1 Aug 16 20:51:35.329: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Aug 16 20:51:35.414: INFO: Number of nodes with available pods: 1 Aug 16 20:51:35.414: INFO: Number of running nodes: 0, number of available pods: 1 Aug 16 20:51:36.419: INFO: Number of nodes with available pods: 0 Aug 16 20:51:36.419: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Aug 16 20:51:36.768: INFO: Number of nodes with available pods: 0 Aug 16 20:51:36.768: INFO: Node jerma-worker2 is running more than one daemon pod Aug 16 20:51:37.954: INFO: Number of nodes with available pods: 0 Aug 16 20:51:37.954: INFO: Node jerma-worker2 is running more than one daemon pod Aug 16 20:51:38.773: INFO: Number of nodes with available pods: 0 Aug 16 20:51:38.773: INFO: Node jerma-worker2 is running more than one daemon pod Aug 16 20:51:39.773: INFO: Number of nodes with available pods: 0 Aug 16 20:51:39.773: INFO: Node jerma-worker2 is running more than one daemon pod Aug 16 20:51:40.788: INFO: Number of nodes with available pods: 0 Aug 16 20:51:40.788: INFO: Node jerma-worker2 is running more than one daemon pod Aug 16 20:51:41.839: INFO: Number of nodes with available pods: 0 Aug 16 20:51:41.839: INFO: Node jerma-worker2 is running more than one daemon pod Aug 16 20:51:42.774: INFO: Number of nodes with available pods: 0 Aug 16 20:51:42.774: INFO: Node jerma-worker2 is running more than one daemon pod Aug 16 20:51:43.774: INFO: Number of nodes with available pods: 0 Aug 16 20:51:43.774: INFO: Node jerma-worker2 is running more than one daemon pod Aug 16 20:51:45.098: INFO: Number of nodes with available pods: 0 Aug 16 20:51:45.098: INFO: Node jerma-worker2 is running more than one daemon pod Aug 16 20:51:45.774: INFO: Number of nodes with available pods: 0 Aug 16 20:51:45.774: INFO: Node jerma-worker2 is running more than one daemon pod Aug 16 20:51:46.774: INFO: Number of nodes with available pods: 0 Aug 16 20:51:46.774: INFO: Node jerma-worker2 is running more than one daemon pod Aug 16 20:51:47.773: INFO: Number of nodes with available pods: 0 Aug 16 20:51:47.773: INFO: Node jerma-worker2 is running more than one daemon pod Aug 16 20:51:48.773: INFO: Number of nodes with available pods: 0 Aug 16 20:51:48.773: INFO: Node jerma-worker2 is running more than one daemon pod Aug 16 20:51:49.775: INFO: Number of nodes with available pods: 0 Aug 16 20:51:49.775: INFO: Node jerma-worker2 is running more than one daemon pod Aug 16 20:51:50.816: INFO: Number of nodes with available pods: 0 Aug 16 20:51:50.816: INFO: Node jerma-worker2 is running 
more than one daemon pod Aug 16 20:51:51.788: INFO: Number of nodes with available pods: 0 Aug 16 20:51:51.788: INFO: Node jerma-worker2 is running more than one daemon pod Aug 16 20:51:52.773: INFO: Number of nodes with available pods: 0 Aug 16 20:51:52.773: INFO: Node jerma-worker2 is running more than one daemon pod Aug 16 20:51:53.775: INFO: Number of nodes with available pods: 0 Aug 16 20:51:53.776: INFO: Node jerma-worker2 is running more than one daemon pod Aug 16 20:51:54.943: INFO: Number of nodes with available pods: 0 Aug 16 20:51:54.943: INFO: Node jerma-worker2 is running more than one daemon pod Aug 16 20:51:55.919: INFO: Number of nodes with available pods: 0 Aug 16 20:51:55.919: INFO: Node jerma-worker2 is running more than one daemon pod Aug 16 20:51:57.032: INFO: Number of nodes with available pods: 0 Aug 16 20:51:57.032: INFO: Node jerma-worker2 is running more than one daemon pod Aug 16 20:51:57.775: INFO: Number of nodes with available pods: 0 Aug 16 20:51:57.775: INFO: Node jerma-worker2 is running more than one daemon pod Aug 16 20:51:58.774: INFO: Number of nodes with available pods: 1 Aug 16 20:51:58.774: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6549, will wait for the garbage collector to delete the pods Aug 16 20:51:58.848: INFO: Deleting DaemonSet.extensions daemon-set took: 8.946978ms Aug 16 20:51:59.149: INFO: Terminating DaemonSet.extensions daemon-set pods took: 301.042208ms Aug 16 20:52:12.654: INFO: Number of nodes with available pods: 0 Aug 16 20:52:12.654: INFO: Number of running nodes: 0, number of available pods: 0 Aug 16 20:52:12.681: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6549/daemonsets","resourceVersion":"497503"},"items":null} Aug 16 20:52:12.735: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6549/pods","resourceVersion":"497504"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:52:12.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6549" for this suite. 
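------------------------------
The DaemonSet test above drives scheduling purely through labels: the pod template carries a nodeSelector, so relabeling a node (blue to green in the log) is what launches or evicts the daemon pod, and the test also switches the update strategy to RollingUpdate. A minimal sketch of a DaemonSet shaped like that, with invented names and a placeholder image, assuming the k8s.io/api and k8s.io/apimachinery modules:

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	labels := map[string]string{"daemonset-name": "daemon-set"}

	// DaemonSet whose pods run only on nodes labeled color=green; the
	// RollingUpdate strategy is the one the test switches to mid-run.
	ds := appsv1.DaemonSet{
		TypeMeta:   metav1.TypeMeta{APIVersion: "apps/v1", Kind: "DaemonSet"},
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set-example"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
				Type: appsv1.RollingUpdateDaemonSetStrategyType,
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					NodeSelector: map[string]string{"color": "green"},
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "k8s.gcr.io/pause:3.1",
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(ds, "", "  ")
	fmt.Println(string(out))
}
------------------------------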
• [SLOW TEST:42.929 seconds] [sig-apps] Daemon set [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":278,"completed":110,"skipped":1674,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} S ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:52:12.938: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:52:13.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-3152" for this suite. 
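------------------------------
The CustomResourceDefinition test above only reads discovery documents: it walks /apis, finds the apiextensions.k8s.io group, and checks that its v1 version advertises the customresourcedefinitions resource. A minimal sketch of the same walk via client-go's discovery client, assuming a recent client-go; the kubeconfig path is a placeholder:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Walk the /apis discovery document, as the test does, and confirm that
	// apiextensions.k8s.io advertises the customresourcedefinitions resource.
	groups, err := cs.Discovery().ServerGroups()
	if err != nil {
		panic(err)
	}
	for _, g := range groups.Groups {
		if g.Name != "apiextensions.k8s.io" {
			continue
		}
		for _, v := range g.Versions {
			resources, err := cs.Discovery().ServerResourcesForGroupVersion(v.GroupVersion)
			if err != nil {
				panic(err)
			}
			for _, r := range resources.APIResources {
				if r.Name == "customresourcedefinitions" {
					fmt.Printf("found %s in %s\n", r.Name, v.GroupVersion)
				}
			}
		}
	}
}
------------------------------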
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":278,"completed":111,"skipped":1675,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:52:13.112: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-e54ebb53-a797-46d3-b82d-fb95fd123c62 STEP: Creating a pod to test consume secrets Aug 16 20:52:13.281: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f1f1e9f7-556e-47eb-8f59-13a71e0c507a" in namespace "projected-3495" to be "success or failure" Aug 16 20:52:13.290: INFO: Pod "pod-projected-secrets-f1f1e9f7-556e-47eb-8f59-13a71e0c507a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.612581ms Aug 16 20:52:15.297: INFO: Pod "pod-projected-secrets-f1f1e9f7-556e-47eb-8f59-13a71e0c507a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015595172s Aug 16 20:52:17.410: INFO: Pod "pod-projected-secrets-f1f1e9f7-556e-47eb-8f59-13a71e0c507a": Phase="Running", Reason="", readiness=true. Elapsed: 4.128172211s Aug 16 20:52:19.416: INFO: Pod "pod-projected-secrets-f1f1e9f7-556e-47eb-8f59-13a71e0c507a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.134789787s STEP: Saw pod success Aug 16 20:52:19.417: INFO: Pod "pod-projected-secrets-f1f1e9f7-556e-47eb-8f59-13a71e0c507a" satisfied condition "success or failure" Aug 16 20:52:19.421: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-f1f1e9f7-556e-47eb-8f59-13a71e0c507a container projected-secret-volume-test: STEP: delete the pod Aug 16 20:52:19.477: INFO: Waiting for pod pod-projected-secrets-f1f1e9f7-556e-47eb-8f59-13a71e0c507a to disappear Aug 16 20:52:19.546: INFO: Pod pod-projected-secrets-f1f1e9f7-556e-47eb-8f59-13a71e0c507a no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:52:19.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3495" for this suite. 
• [SLOW TEST:6.449 seconds] [sig-storage] Projected secret /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":112,"skipped":1714,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:52:19.564: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Aug 16 20:52:19.813: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f48a8b58-b760-477e-8315-52ff6d7da250" in namespace "projected-2205" to be "success or failure" Aug 16 20:52:19.870: INFO: Pod "downwardapi-volume-f48a8b58-b760-477e-8315-52ff6d7da250": Phase="Pending", Reason="", readiness=false. Elapsed: 57.025709ms Aug 16 20:52:21.876: INFO: Pod "downwardapi-volume-f48a8b58-b760-477e-8315-52ff6d7da250": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063002844s Aug 16 20:52:23.883: INFO: Pod "downwardapi-volume-f48a8b58-b760-477e-8315-52ff6d7da250": Phase="Pending", Reason="", readiness=false. Elapsed: 4.070511739s Aug 16 20:52:28.347: INFO: Pod "downwardapi-volume-f48a8b58-b760-477e-8315-52ff6d7da250": Phase="Running", Reason="", readiness=true. Elapsed: 8.534341365s Aug 16 20:52:30.353: INFO: Pod "downwardapi-volume-f48a8b58-b760-477e-8315-52ff6d7da250": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.540559088s STEP: Saw pod success Aug 16 20:52:30.354: INFO: Pod "downwardapi-volume-f48a8b58-b760-477e-8315-52ff6d7da250" satisfied condition "success or failure" Aug 16 20:52:30.357: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-f48a8b58-b760-477e-8315-52ff6d7da250 container client-container: STEP: delete the pod Aug 16 20:52:30.447: INFO: Waiting for pod downwardapi-volume-f48a8b58-b760-477e-8315-52ff6d7da250 to disappear Aug 16 20:52:30.683: INFO: Pod downwardapi-volume-f48a8b58-b760-477e-8315-52ff6d7da250 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:52:30.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2205" for this suite. • [SLOW TEST:11.130 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide container's memory limit [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":113,"skipped":1763,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:52:30.696: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Aug 16 20:52:30.782: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Aug 16 20:52:30.939: INFO: Waiting for terminating namespaces to be deleted... 
Aug 16 20:52:31.025: INFO: Logging pods the kubelet thinks is on node jerma-worker before test Aug 16 20:52:31.049: INFO: kindnet-tfrcx from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded) Aug 16 20:52:31.049: INFO: Container kindnet-cni ready: true, restart count 0 Aug 16 20:52:31.049: INFO: kube-proxy-lgd85 from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded) Aug 16 20:52:31.049: INFO: Container kube-proxy ready: true, restart count 0 Aug 16 20:52:31.049: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test Aug 16 20:52:31.058: INFO: kindnet-gxck9 from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded) Aug 16 20:52:31.058: INFO: Container kindnet-cni ready: true, restart count 0 Aug 16 20:52:31.058: INFO: kube-proxy-ckhpn from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded) Aug 16 20:52:31.058: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.162bdae09d88b4cf], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:52:33.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4852" for this suite. 
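------------------------------
The scheduler-predicates test above submits a pod whose nodeSelector matches no node and then waits for the FailedScheduling event quoted in the log. A minimal sketch of such a pod, with invented name, label and image, assuming the k8s.io/api and k8s.io/apimachinery modules:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Pod with a nodeSelector that no node satisfies; the scheduler leaves it
	// Pending and emits "0/3 nodes are available: 3 node(s) didn't match
	// node selector", as seen in the event above.
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod-example"},
		Spec: corev1.PodSpec{
			NodeSelector: map[string]string{
				"label": "nonempty-missing-label", // no node carries this label
			},
			Containers: []corev1.Container{{
				Name:  "restricted",
				Image: "k8s.gcr.io/pause:3.1",
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
------------------------------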
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":278,"completed":114,"skipped":1778,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:52:33.361: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should serve a basic endpoint from pods [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service endpoint-test2 in namespace services-8081 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8081 to expose endpoints map[] Aug 16 20:52:35.446: INFO: successfully validated that service endpoint-test2 in namespace services-8081 exposes endpoints map[] (274.662127ms elapsed) STEP: Creating pod pod1 in namespace services-8081 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8081 to expose endpoints map[pod1:[80]] Aug 16 20:52:40.871: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.957851858s elapsed, will retry) Aug 16 20:52:41.880: INFO: successfully validated that service endpoint-test2 in namespace services-8081 exposes endpoints map[pod1:[80]] (5.966618797s elapsed) STEP: Creating pod pod2 in namespace services-8081 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8081 to expose endpoints map[pod1:[80] pod2:[80]] Aug 16 20:52:46.762: INFO: Unexpected endpoints: found map[2ec1ddbc-5fcc-4c5d-9390-eebcfff9aa21:[80]], expected map[pod1:[80] pod2:[80]] (4.877126463s elapsed, will retry) Aug 16 20:52:50.252: INFO: successfully validated that service endpoint-test2 in namespace services-8081 exposes endpoints map[pod1:[80] pod2:[80]] (8.366873221s elapsed) STEP: Deleting pod pod1 in namespace services-8081 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8081 to expose endpoints map[pod2:[80]] Aug 16 20:52:50.382: INFO: successfully validated that service endpoint-test2 in namespace services-8081 exposes endpoints map[pod2:[80]] (125.203349ms elapsed) STEP: Deleting pod pod2 in namespace services-8081 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8081 to expose endpoints map[] Aug 16 20:52:50.655: INFO: successfully validated that service endpoint-test2 in namespace services-8081 exposes endpoints map[] (267.704974ms elapsed) [AfterEach] [sig-network] 
Services /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:52:51.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8081" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:18.414 seconds] [sig-network] Services /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":278,"completed":115,"skipped":1813,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:52:51.777: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-5863 [It] should have a working scale subresource [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating statefulset ss in namespace statefulset-5863 Aug 16 20:52:52.841: INFO: Found 0 stateful pods, waiting for 1 Aug 16 20:53:02.847: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Aug 16 20:53:02.877: INFO: Deleting all statefulset in ns statefulset-5863 Aug 16 20:53:02.895: INFO: Scaling statefulset ss to 0 Aug 16 20:53:22.951: INFO: Waiting for statefulset status.replicas updated to 0 Aug 16 20:53:22.954: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet 
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:53:22.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5863" for this suite. • [SLOW TEST:31.200 seconds] [sig-apps] StatefulSet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should have a working scale subresource [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":278,"completed":116,"skipped":1827,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:53:22.978: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-33858d8b-632f-4044-a168-e120cc1967e6 STEP: Creating a pod to test consume secrets Aug 16 20:53:23.130: INFO: Waiting up to 5m0s for pod "pod-secrets-1cab030c-19d1-4754-8390-c704fe4d753d" in namespace "secrets-259" to be "success or failure" Aug 16 20:53:23.147: INFO: Pod "pod-secrets-1cab030c-19d1-4754-8390-c704fe4d753d": Phase="Pending", Reason="", readiness=false. Elapsed: 16.405574ms Aug 16 20:53:25.152: INFO: Pod "pod-secrets-1cab030c-19d1-4754-8390-c704fe4d753d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021170759s Aug 16 20:53:27.155: INFO: Pod "pod-secrets-1cab030c-19d1-4754-8390-c704fe4d753d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.024930511s STEP: Saw pod success Aug 16 20:53:27.156: INFO: Pod "pod-secrets-1cab030c-19d1-4754-8390-c704fe4d753d" satisfied condition "success or failure" Aug 16 20:53:27.158: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-1cab030c-19d1-4754-8390-c704fe4d753d container secret-volume-test: STEP: delete the pod Aug 16 20:53:27.178: INFO: Waiting for pod pod-secrets-1cab030c-19d1-4754-8390-c704fe4d753d to disappear Aug 16 20:53:27.215: INFO: Pod pod-secrets-1cab030c-19d1-4754-8390-c704fe4d753d no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:53:27.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-259" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":117,"skipped":1827,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:53:27.231: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Aug 16 20:53:27.356: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR Aug 16 20:53:27.993: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-16T20:53:27Z generation:1 name:name1 resourceVersion:497942 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:6f614dab-ff51-4e90-b87b-169929870091] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR Aug 16 20:53:38.153: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-16T20:53:37Z generation:1 name:name2 resourceVersion:498005 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:de397f9b-f4bc-44e4-a30a-064ccb7ab0e1] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR Aug 16 20:53:48.163: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-16T20:53:27Z generation:2 name:name1 resourceVersion:498035 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:6f614dab-ff51-4e90-b87b-169929870091] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR Aug 16 20:53:58.170: INFO: Got : 
MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-16T20:53:37Z generation:2 name:name2 resourceVersion:498061 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:de397f9b-f4bc-44e4-a30a-064ccb7ab0e1] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR Aug 16 20:54:08.181: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-16T20:53:27Z generation:2 name:name1 resourceVersion:498091 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:6f614dab-ff51-4e90-b87b-169929870091] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR Aug 16 20:54:18.192: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-16T20:53:37Z generation:2 name:name2 resourceVersion:498121 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:de397f9b-f4bc-44e4-a30a-064ccb7ab0e1] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:54:28.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-1899" for this suite. • [SLOW TEST:61.590 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:41 watch on custom resource definition objects [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":278,"completed":118,"skipped":1840,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Pods Extended /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:54:28.823: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Delete Grace Period /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:50 [It] should be submitted and removed [Conformance] 
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes Aug 16 20:54:35.007: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0' STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Aug 16 20:54:46.516: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [k8s.io] [sig-node] Pods Extended /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:54:46.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-866" for this suite. • [SLOW TEST:17.708 seconds] [k8s.io] [sig-node] Pods Extended /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 [k8s.io] Delete Grace Period /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be submitted and removed [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]","total":278,"completed":119,"skipped":1854,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:54:46.531: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:55:04.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9083" for this suite. 
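The quota bookkeeping stepped through above (create a ResourceQuota, create a Secret, watch status.used rise, delete the Secret, watch the usage get released) can be reproduced outside the e2e framework with a short client-go program. The sketch below is illustrative only: it assumes a recent client-go whose methods take a context (the 1.17-era client this suite was built against omits it), a kubeconfig at the default location, and invented names such as "quota-demo".

package main

import (
    "context"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/resource"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    ctx, ns := context.TODO(), "default"

    // A quota that caps the number of Secrets in the namespace.
    quota := &corev1.ResourceQuota{
        ObjectMeta: metav1.ObjectMeta{Name: "quota-demo"},
        Spec: corev1.ResourceQuotaSpec{
            Hard: corev1.ResourceList{corev1.ResourceSecrets: resource.MustParse("10")},
        },
    }
    if _, err := cs.CoreV1().ResourceQuotas(ns).Create(ctx, quota, metav1.CreateOptions{}); err != nil {
        panic(err)
    }

    // Creating a Secret should show up in the quota's status.used["secrets"].
    secret := &corev1.Secret{
        ObjectMeta: metav1.ObjectMeta{Name: "quota-demo-secret"},
        StringData: map[string]string{"key": "value"},
    }
    if _, err := cs.CoreV1().Secrets(ns).Create(ctx, secret, metav1.CreateOptions{}); err != nil {
        panic(err)
    }

    // Read the quota back; the controller recalculates usage asynchronously,
    // so a real program would poll here instead of reading once.
    got, err := cs.CoreV1().ResourceQuotas(ns).Get(ctx, "quota-demo", metav1.GetOptions{})
    if err != nil {
        panic(err)
    }
    used := got.Status.Used[corev1.ResourceSecrets]
    fmt.Println("used secrets:", used.String())

    // Deleting the Secret should eventually release the usage again.
    if err := cs.CoreV1().Secrets(ns).Delete(ctx, "quota-demo-secret", metav1.DeleteOptions{}); err != nil {
        panic(err)
    }
}

The usage figures lag by a moment because the resource quota controller updates them asynchronously, which is why the test above waits on "Ensuring resource quota status ..." steps rather than asserting immediately.
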
• [SLOW TEST:17.601 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":278,"completed":120,"skipped":1858,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:55:04.134: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should delete old replica sets [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Aug 16 20:55:04.310: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Aug 16 20:55:09.316: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Aug 16 20:55:09.318: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Aug 16 20:55:15.772: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-4968 /apis/apps/v1/namespaces/deployment-4968/deployments/test-cleanup-deployment c7de3cb6-7b16-42df-9594-8af87f172349 498375 1 2020-08-16 20:55:09 +0000 UTC map[name:cleanup-pod] map[deployment.kubernetes.io/revision:1] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x4002c5f9d8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil 
default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-08-16 20:55:10 +0000 UTC,LastTransitionTime:2020-08-16 20:55:10 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-cleanup-deployment-55ffc6b7b6" has successfully progressed.,LastUpdateTime:2020-08-16 20:55:14 +0000 UTC,LastTransitionTime:2020-08-16 20:55:09 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Aug 16 20:55:15.889: INFO: New ReplicaSet "test-cleanup-deployment-55ffc6b7b6" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6 deployment-4968 /apis/apps/v1/namespaces/deployment-4968/replicasets/test-cleanup-deployment-55ffc6b7b6 b2583211-5e3a-4823-8e85-ebf721448a1a 498364 1 2020-08-16 20:55:09 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment c7de3cb6-7b16-42df-9594-8af87f172349 0x40028a9dc7 0x40028a9dc8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55ffc6b7b6,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x40028a9e38 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Aug 16 20:55:16.810: INFO: Pod "test-cleanup-deployment-55ffc6b7b6-wkn2n" is available: &Pod{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6-wkn2n test-cleanup-deployment-55ffc6b7b6- deployment-4968 /api/v1/namespaces/deployment-4968/pods/test-cleanup-deployment-55ffc6b7b6-wkn2n d0484e8c-4995-4ed3-a805-84eb9e033f50 498363 0 2020-08-16 20:55:09 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-55ffc6b7b6 b2583211-5e3a-4823-8e85-ebf721448a1a 0x4002f345f7 0x4002f345f8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-hxb9t,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-hxb9t,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-hxb9t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 20:55:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 20:55:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 20:55:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 20:55:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:10.244.1.221,StartTime:2020-08-16 20:55:10 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-16 20:55:13 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://adfc367948f328b7c29d8aae422e289d21ebed0bb3c91a70c7c551ba09e7a8c0,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.221,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:55:16.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4968" for this suite. • [SLOW TEST:12.689 seconds] [sig-apps] Deployment /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":278,"completed":121,"skipped":1875,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:55:16.826: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Aug 16 20:55:17.319: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1d3f4cd8-d705-4a60-811f-3f634d708d13" in namespace "downward-api-8839" to be "success or failure" Aug 16 20:55:17.342: INFO: Pod "downwardapi-volume-1d3f4cd8-d705-4a60-811f-3f634d708d13": Phase="Pending", Reason="", readiness=false. Elapsed: 23.26912ms Aug 16 20:55:19.348: INFO: Pod "downwardapi-volume-1d3f4cd8-d705-4a60-811f-3f634d708d13": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.029139608s Aug 16 20:55:21.938: INFO: Pod "downwardapi-volume-1d3f4cd8-d705-4a60-811f-3f634d708d13": Phase="Pending", Reason="", readiness=false. Elapsed: 4.619096556s Aug 16 20:55:23.992: INFO: Pod "downwardapi-volume-1d3f4cd8-d705-4a60-811f-3f634d708d13": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.67320491s STEP: Saw pod success Aug 16 20:55:23.993: INFO: Pod "downwardapi-volume-1d3f4cd8-d705-4a60-811f-3f634d708d13" satisfied condition "success or failure" Aug 16 20:55:24.019: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-1d3f4cd8-d705-4a60-811f-3f634d708d13 container client-container: STEP: delete the pod Aug 16 20:55:24.078: INFO: Waiting for pod downwardapi-volume-1d3f4cd8-d705-4a60-811f-3f634d708d13 to disappear Aug 16 20:55:24.170: INFO: Pod downwardapi-volume-1d3f4cd8-d705-4a60-811f-3f634d708d13 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:55:24.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8839" for this suite. • [SLOW TEST:7.384 seconds] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":122,"skipped":1922,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:55:24.211: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-9521a468-98a0-41e5-ba5d-5ed5e6f4fccc STEP: Creating a pod to test consume secrets Aug 16 20:55:25.051: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-6299afba-d0ef-47e1-b765-f0a5ef739d39" in namespace "projected-6580" to be "success or failure" Aug 16 20:55:25.458: INFO: Pod "pod-projected-secrets-6299afba-d0ef-47e1-b765-f0a5ef739d39": Phase="Pending", Reason="", readiness=false. Elapsed: 407.00657ms Aug 16 20:55:27.463: INFO: Pod "pod-projected-secrets-6299afba-d0ef-47e1-b765-f0a5ef739d39": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.411449927s Aug 16 20:55:29.579: INFO: Pod "pod-projected-secrets-6299afba-d0ef-47e1-b765-f0a5ef739d39": Phase="Pending", Reason="", readiness=false. Elapsed: 4.528153016s Aug 16 20:55:31.585: INFO: Pod "pod-projected-secrets-6299afba-d0ef-47e1-b765-f0a5ef739d39": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.533368598s STEP: Saw pod success Aug 16 20:55:31.585: INFO: Pod "pod-projected-secrets-6299afba-d0ef-47e1-b765-f0a5ef739d39" satisfied condition "success or failure" Aug 16 20:55:31.590: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-6299afba-d0ef-47e1-b765-f0a5ef739d39 container projected-secret-volume-test: STEP: delete the pod Aug 16 20:55:31.741: INFO: Waiting for pod pod-projected-secrets-6299afba-d0ef-47e1-b765-f0a5ef739d39 to disappear Aug 16 20:55:31.808: INFO: Pod pod-projected-secrets-6299afba-d0ef-47e1-b765-f0a5ef739d39 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:55:31.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6580" for this suite. • [SLOW TEST:7.619 seconds] [sig-storage] Projected secret /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":123,"skipped":1922,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:55:31.832: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-f79ca6b9-a1d6-4be7-ba57-2ca5c6a75911 STEP: Creating a pod to test consume secrets Aug 16 20:55:32.070: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b76a7c45-de3b-4540-a643-8a5642da79ce" in namespace "projected-6902" to be "success or failure" Aug 16 20:55:32.104: INFO: Pod "pod-projected-secrets-b76a7c45-de3b-4540-a643-8a5642da79ce": Phase="Pending", Reason="", readiness=false. 
Elapsed: 33.483303ms Aug 16 20:55:34.110: INFO: Pod "pod-projected-secrets-b76a7c45-de3b-4540-a643-8a5642da79ce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039176776s Aug 16 20:55:36.116: INFO: Pod "pod-projected-secrets-b76a7c45-de3b-4540-a643-8a5642da79ce": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045664636s Aug 16 20:55:38.285: INFO: Pod "pod-projected-secrets-b76a7c45-de3b-4540-a643-8a5642da79ce": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.214938255s STEP: Saw pod success Aug 16 20:55:38.286: INFO: Pod "pod-projected-secrets-b76a7c45-de3b-4540-a643-8a5642da79ce" satisfied condition "success or failure" Aug 16 20:55:38.666: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-b76a7c45-de3b-4540-a643-8a5642da79ce container projected-secret-volume-test: STEP: delete the pod Aug 16 20:55:39.223: INFO: Waiting for pod pod-projected-secrets-b76a7c45-de3b-4540-a643-8a5642da79ce to disappear Aug 16 20:55:39.246: INFO: Pod pod-projected-secrets-b76a7c45-de3b-4540-a643-8a5642da79ce no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:55:39.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6902" for this suite. • [SLOW TEST:7.626 seconds] [sig-storage] Projected secret /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":124,"skipped":1937,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:55:39.460: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the 
termination message should be set Aug 16 20:55:46.286: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:55:46.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-5303" for this suite. • [SLOW TEST:6.857 seconds] [k8s.io] Container Runtime /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 blackbox test /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131 should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":125,"skipped":1944,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:55:46.322: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:55:53.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-6076" for this suite. 
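The adoption sequence logged above (a bare Pod carrying a 'name' label created first, then a ReplicationController whose selector matches it) comes down to ownership bookkeeping: the replication controller manager adds an ownerReference to the existing orphan instead of creating a new replica. A minimal client-go sketch of the same setup follows; it assumes a recent client-go, a kubeconfig at the default path, and invented names ("pod-adoption", "rc-adoption") with the pause image as a stand-in workload.

package main

import (
    "context"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    ctx, ns := context.TODO(), "default"
    selector := map[string]string{"name": "pod-adoption"}

    // The orphan pod: no ownerReferences, just the label the RC will select on.
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption", Labels: selector},
        Spec: corev1.PodSpec{Containers: []corev1.Container{{
            Name:  "app",
            Image: "registry.k8s.io/pause:3.9", // any long-running image works
        }}},
    }
    if _, err := cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{}); err != nil {
        panic(err)
    }

    // An RC whose selector matches the pod's labels, with one desired replica.
    one := int32(1)
    rc := &corev1.ReplicationController{
        ObjectMeta: metav1.ObjectMeta{Name: "rc-adoption"},
        Spec: corev1.ReplicationControllerSpec{
            Replicas: &one,
            Selector: selector,
            Template: &corev1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{Labels: selector},
                Spec:       pod.Spec,
            },
        },
    }
    if _, err := cs.CoreV1().ReplicationControllers(ns).Create(ctx, rc, metav1.CreateOptions{}); err != nil {
        panic(err)
    }

    // The ownerReference appears on the controller's next sync, so a real
    // program would poll rather than read back immediately.
    got, err := cs.CoreV1().Pods(ns).Get(ctx, "pod-adoption", metav1.GetOptions{})
    if err != nil {
        panic(err)
    }
    fmt.Println("ownerReferences:", got.OwnerReferences)
}
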
• [SLOW TEST:7.369 seconds] [sig-apps] ReplicationController /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":278,"completed":126,"skipped":2018,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:55:53.693: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Docker Containers /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:55:57.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-149" for this suite. 
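The Docker Containers test above relies on a simple rule: when a container spec leaves both command and args empty, the kubelet runs whatever ENTRYPOINT and CMD the image itself declares. The snippet below only constructs and prints such a pod; the busybox image and the names are examples for illustration, not what the suite itself uses.

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := corev1.Pod{
        TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
        ObjectMeta: metav1.ObjectMeta{Name: "image-defaults"},
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:  "app",
                Image: "busybox:1.36",
                // Command and Args intentionally left empty: the image's own
                // ENTRYPOINT/CMD decide what runs.
            }},
            RestartPolicy: corev1.RestartPolicyNever,
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}
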
•{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":278,"completed":127,"skipped":2037,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:55:57.858: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Aug 16 20:55:57.941: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Aug 16 20:56:16.955: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4892 create -f -' Aug 16 20:56:23.841: INFO: stderr: "" Aug 16 20:56:23.842: INFO: stdout: "e2e-test-crd-publish-openapi-6727-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Aug 16 20:56:23.842: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4892 delete e2e-test-crd-publish-openapi-6727-crds test-cr' Aug 16 20:56:25.099: INFO: stderr: "" Aug 16 20:56:25.100: INFO: stdout: "e2e-test-crd-publish-openapi-6727-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Aug 16 20:56:25.100: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4892 apply -f -' Aug 16 20:56:26.654: INFO: stderr: "" Aug 16 20:56:26.654: INFO: stdout: "e2e-test-crd-publish-openapi-6727-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Aug 16 20:56:26.655: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4892 delete e2e-test-crd-publish-openapi-6727-crds test-cr' Aug 16 20:56:27.893: INFO: stderr: "" Aug 16 20:56:27.893: INFO: stdout: "e2e-test-crd-publish-openapi-6727-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Aug 16 20:56:27.894: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6727-crds' Aug 16 20:56:29.426: INFO: stderr: "" Aug 16 20:56:29.426: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6727-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. 
Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:56:48.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4892" for this suite. • [SLOW TEST:50.350 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":278,"completed":128,"skipped":2066,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:56:48.209: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap configmap-1107/configmap-test-9e395f63-ee91-4e1c-94d9-0eb757c2e89c STEP: Creating a pod to test consume configMaps Aug 16 20:56:48.808: INFO: Waiting up to 5m0s for pod "pod-configmaps-37ececef-a798-4a50-97be-c22677adfa0c" in namespace "configmap-1107" to be "success or failure" Aug 16 20:56:48.848: INFO: Pod "pod-configmaps-37ececef-a798-4a50-97be-c22677adfa0c": Phase="Pending", Reason="", readiness=false. Elapsed: 40.145724ms Aug 16 20:56:51.087: INFO: Pod "pod-configmaps-37ececef-a798-4a50-97be-c22677adfa0c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.279619787s Aug 16 20:56:53.092: INFO: Pod "pod-configmaps-37ececef-a798-4a50-97be-c22677adfa0c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.284229082s Aug 16 20:56:55.098: INFO: Pod "pod-configmaps-37ececef-a798-4a50-97be-c22677adfa0c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.290500132s STEP: Saw pod success Aug 16 20:56:55.099: INFO: Pod "pod-configmaps-37ececef-a798-4a50-97be-c22677adfa0c" satisfied condition "success or failure" Aug 16 20:56:55.103: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-37ececef-a798-4a50-97be-c22677adfa0c container env-test: STEP: delete the pod Aug 16 20:56:55.136: INFO: Waiting for pod pod-configmaps-37ececef-a798-4a50-97be-c22677adfa0c to disappear Aug 16 20:56:55.220: INFO: Pod pod-configmaps-37ececef-a798-4a50-97be-c22677adfa0c no longer exists [AfterEach] [sig-node] ConfigMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:56:55.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1107" for this suite. • [SLOW TEST:7.025 seconds] [sig-node] ConfigMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via environment variable [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":278,"completed":129,"skipped":2072,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:56:55.237: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Aug 16 20:56:55.376: INFO: Waiting up to 5m0s for pod "downward-api-b3234e2b-604b-4c69-b9e3-39bb88cf86a8" in namespace "downward-api-5286" to be "success or failure" Aug 16 20:56:55.394: INFO: Pod "downward-api-b3234e2b-604b-4c69-b9e3-39bb88cf86a8": Phase="Pending", Reason="", readiness=false. Elapsed: 17.111777ms Aug 16 20:56:57.401: INFO: Pod "downward-api-b3234e2b-604b-4c69-b9e3-39bb88cf86a8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024413117s Aug 16 20:56:59.409: INFO: Pod "downward-api-b3234e2b-604b-4c69-b9e3-39bb88cf86a8": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.03264845s Aug 16 20:57:01.415: INFO: Pod "downward-api-b3234e2b-604b-4c69-b9e3-39bb88cf86a8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.038324034s STEP: Saw pod success Aug 16 20:57:01.415: INFO: Pod "downward-api-b3234e2b-604b-4c69-b9e3-39bb88cf86a8" satisfied condition "success or failure" Aug 16 20:57:01.419: INFO: Trying to get logs from node jerma-worker pod downward-api-b3234e2b-604b-4c69-b9e3-39bb88cf86a8 container dapi-container: STEP: delete the pod Aug 16 20:57:01.473: INFO: Waiting for pod downward-api-b3234e2b-604b-4c69-b9e3-39bb88cf86a8 to disappear Aug 16 20:57:01.504: INFO: Pod downward-api-b3234e2b-604b-4c69-b9e3-39bb88cf86a8 no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:57:01.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5286" for this suite. • [SLOW TEST:6.276 seconds] [sig-node] Downward API /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":278,"completed":130,"skipped":2099,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:57:01.513: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Aug 16 20:57:06.249: INFO: Successfully updated pod "annotationupdatee8ef4028-34bd-4ca4-a9e5-d2e8432df1af" [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:57:08.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2174" for this suite. 
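For reference, a minimal Go sketch of the kind of pod the projected-downwardAPI annotation test above exercises; it is illustrative only and not taken from the test source. The pod name, image, and mount path are placeholders, and the structs follow k8s.io/api v0.17.x to match the cluster version in this run. The pod's annotations are projected into a file, and the kubelet rewrites that file when the annotations are updated, which is what the test verifies after patching the pod.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Pod whose annotations are exposed to the container through a projected
	// downwardAPI volume; updating the annotations updates /etc/podinfo/annotations.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:        "annotationupdate-example", // placeholder name
			Annotations: map[string]string{"build": "one"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox", // placeholder image
				Command: []string{"sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "podinfo",
					MountPath: "/etc/podinfo",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path:     "annotations",
									FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.annotations"},
								}},
							},
						}},
					},
				},
			}},
		},
	}
	fmt.Printf("projected volume: %+v\n", pod.Spec.Volumes[0])
}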
• [SLOW TEST:7.077 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":131,"skipped":2103,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:57:08.592: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Aug 16 20:57:08.824: INFO: Pod name pod-release: Found 0 pods out of 1 Aug 16 20:57:13.880: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:57:13.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-7813" for this suite. 
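A hedged sketch of the mechanism the ReplicationController test above relies on, not the test's own code: an RC adopts pods matching its selector, and relabelling a managed pod so it no longer matches makes the controller "release" it (its ownerReference is cleared and a replacement is created). Names, labels, and the image are placeholders; structs follow k8s.io/api v0.17.x.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	// ReplicationController that adopts pods labelled name=pod-release.
	rc := corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-release"},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: int32Ptr(1),
			Selector: map[string]string{"name": "pod-release"},
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": "pod-release"}},
				Spec: corev1.PodSpec{Containers: []corev1.Container{{
					Name:  "pod-release",
					Image: "busybox", // placeholder image
				}}},
			},
		},
	}

	// Patching a managed pod's label so it no longer matches the selector is
	// what causes the controller to release it, which is the behaviour the
	// test checks after the pod comes up.
	patch := `{"metadata":{"labels":{"name":"released"}}}`
	fmt.Println(rc.Spec.Selector, patch)
}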
• [SLOW TEST:5.459 seconds] [sig-apps] ReplicationController /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":278,"completed":132,"skipped":2111,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:57:14.052: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 16 20:57:17.214: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 16 20:57:19.429: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733208237, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733208237, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733208237, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733208237, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 16 20:57:22.705: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook 
[Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:57:23.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-441" for this suite. STEP: Destroying namespace "webhook-441-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:12.400 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":278,"completed":133,"skipped":2117,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:57:26.453: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should provide secure master service [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [sig-network] Services /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:57:27.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7975" for this suite. 
[AfterEach] [sig-network] Services /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":278,"completed":134,"skipped":2133,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:57:27.559: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to create a functioning NodePort service [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service nodeport-test with type=NodePort in namespace services-2283 STEP: creating replication controller nodeport-test in namespace services-2283 I0816 20:57:29.309975 7 runners.go:189] Created replication controller with name: nodeport-test, namespace: services-2283, replica count: 2 I0816 20:57:32.361797 7 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0816 20:57:35.362626 7 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0816 20:57:38.363273 7 runners.go:189] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Aug 16 20:57:38.363: INFO: Creating new exec pod Aug 16 20:57:45.417: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-2283 execpodtb9gx -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' Aug 16 20:57:47.161: INFO: stderr: "I0816 20:57:46.924041 1943 log.go:172] (0x4000acaa50) (0x40006e60a0) Create stream\nI0816 20:57:46.927159 1943 log.go:172] (0x4000acaa50) (0x40006e60a0) Stream added, broadcasting: 1\nI0816 20:57:46.937134 1943 log.go:172] (0x4000acaa50) Reply frame received for 1\nI0816 20:57:46.937742 1943 log.go:172] (0x4000acaa50) (0x4000788000) Create stream\nI0816 20:57:46.937815 1943 log.go:172] (0x4000acaa50) (0x4000788000) Stream added, broadcasting: 3\nI0816 20:57:46.939045 1943 log.go:172] (0x4000acaa50) Reply frame received for 3\nI0816 20:57:46.939388 1943 log.go:172] (0x4000acaa50) (0x40007880a0) Create stream\nI0816 20:57:46.939483 1943 log.go:172] (0x4000acaa50) (0x40007880a0) Stream added, broadcasting: 5\nI0816 20:57:46.940720 1943 log.go:172] (0x4000acaa50) Reply frame received for 5\nI0816 20:57:46.989991 1943 log.go:172] (0x4000acaa50) Data frame received for 5\nI0816 
20:57:46.990310 1943 log.go:172] (0x40007880a0) (5) Data frame handling\n+ nc -zv -t -w 2 nodeport-test 80\nI0816 20:57:46.991458 1943 log.go:172] (0x40007880a0) (5) Data frame sent\nI0816 20:57:47.140442 1943 log.go:172] (0x4000acaa50) Data frame received for 5\nI0816 20:57:47.140569 1943 log.go:172] (0x40007880a0) (5) Data frame handling\nI0816 20:57:47.140689 1943 log.go:172] (0x40007880a0) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0816 20:57:47.141022 1943 log.go:172] (0x4000acaa50) Data frame received for 3\nI0816 20:57:47.141138 1943 log.go:172] (0x4000788000) (3) Data frame handling\nI0816 20:57:47.144952 1943 log.go:172] (0x4000acaa50) Data frame received for 5\nI0816 20:57:47.145008 1943 log.go:172] (0x40007880a0) (5) Data frame handling\nI0816 20:57:47.145409 1943 log.go:172] (0x4000acaa50) Data frame received for 1\nI0816 20:57:47.145521 1943 log.go:172] (0x40006e60a0) (1) Data frame handling\nI0816 20:57:47.145631 1943 log.go:172] (0x40006e60a0) (1) Data frame sent\nI0816 20:57:47.147391 1943 log.go:172] (0x4000acaa50) (0x40006e60a0) Stream removed, broadcasting: 1\nI0816 20:57:47.149032 1943 log.go:172] (0x4000acaa50) Go away received\nI0816 20:57:47.151236 1943 log.go:172] (0x4000acaa50) (0x40006e60a0) Stream removed, broadcasting: 1\nI0816 20:57:47.151553 1943 log.go:172] (0x4000acaa50) (0x4000788000) Stream removed, broadcasting: 3\nI0816 20:57:47.151750 1943 log.go:172] (0x4000acaa50) (0x40007880a0) Stream removed, broadcasting: 5\n" Aug 16 20:57:47.162: INFO: stdout: "" Aug 16 20:57:47.165: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-2283 execpodtb9gx -- /bin/sh -x -c nc -zv -t -w 2 10.99.140.174 80' Aug 16 20:57:48.946: INFO: stderr: "I0816 20:57:48.849005 1966 log.go:172] (0x40004bb130) (0x4000a42000) Create stream\nI0816 20:57:48.853162 1966 log.go:172] (0x40004bb130) (0x4000a42000) Stream added, broadcasting: 1\nI0816 20:57:48.863493 1966 log.go:172] (0x40004bb130) Reply frame received for 1\nI0816 20:57:48.864028 1966 log.go:172] (0x40004bb130) (0x400090e000) Create stream\nI0816 20:57:48.864084 1966 log.go:172] (0x40004bb130) (0x400090e000) Stream added, broadcasting: 3\nI0816 20:57:48.865262 1966 log.go:172] (0x40004bb130) Reply frame received for 3\nI0816 20:57:48.865494 1966 log.go:172] (0x40004bb130) (0x4000a420a0) Create stream\nI0816 20:57:48.865546 1966 log.go:172] (0x40004bb130) (0x4000a420a0) Stream added, broadcasting: 5\nI0816 20:57:48.866614 1966 log.go:172] (0x40004bb130) Reply frame received for 5\nI0816 20:57:48.932067 1966 log.go:172] (0x40004bb130) Data frame received for 3\nI0816 20:57:48.932348 1966 log.go:172] (0x400090e000) (3) Data frame handling\nI0816 20:57:48.932520 1966 log.go:172] (0x40004bb130) Data frame received for 5\nI0816 20:57:48.932648 1966 log.go:172] (0x4000a420a0) (5) Data frame handling\nI0816 20:57:48.932852 1966 log.go:172] (0x40004bb130) Data frame received for 1\nI0816 20:57:48.932960 1966 log.go:172] (0x4000a42000) (1) Data frame handling\n+ nc -zv -t -w 2 10.99.140.174 80\nConnection to 10.99.140.174 80 port [tcp/http] succeeded!\nI0816 20:57:48.934620 1966 log.go:172] (0x4000a420a0) (5) Data frame sent\nI0816 20:57:48.934914 1966 log.go:172] (0x4000a42000) (1) Data frame sent\nI0816 20:57:48.935190 1966 log.go:172] (0x40004bb130) Data frame received for 5\nI0816 20:57:48.935265 1966 log.go:172] (0x4000a420a0) (5) Data frame handling\nI0816 20:57:48.936455 1966 log.go:172] (0x40004bb130) (0x4000a42000) Stream removed, broadcasting: 
1\nI0816 20:57:48.937559 1966 log.go:172] (0x40004bb130) Go away received\nI0816 20:57:48.940005 1966 log.go:172] (0x40004bb130) (0x4000a42000) Stream removed, broadcasting: 1\nI0816 20:57:48.940337 1966 log.go:172] (0x40004bb130) (0x400090e000) Stream removed, broadcasting: 3\nI0816 20:57:48.940464 1966 log.go:172] (0x40004bb130) (0x4000a420a0) Stream removed, broadcasting: 5\n" Aug 16 20:57:48.947: INFO: stdout: "" Aug 16 20:57:48.947: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-2283 execpodtb9gx -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.6 31960' Aug 16 20:57:50.635: INFO: stderr: "I0816 20:57:50.517060 1989 log.go:172] (0x4000aa8a50) (0x400099c000) Create stream\nI0816 20:57:50.522425 1989 log.go:172] (0x4000aa8a50) (0x400099c000) Stream added, broadcasting: 1\nI0816 20:57:50.536705 1989 log.go:172] (0x4000aa8a50) Reply frame received for 1\nI0816 20:57:50.537607 1989 log.go:172] (0x4000aa8a50) (0x400080dae0) Create stream\nI0816 20:57:50.537689 1989 log.go:172] (0x4000aa8a50) (0x400080dae0) Stream added, broadcasting: 3\nI0816 20:57:50.539463 1989 log.go:172] (0x4000aa8a50) Reply frame received for 3\nI0816 20:57:50.539776 1989 log.go:172] (0x4000aa8a50) (0x4000bf80a0) Create stream\nI0816 20:57:50.539844 1989 log.go:172] (0x4000aa8a50) (0x4000bf80a0) Stream added, broadcasting: 5\nI0816 20:57:50.541307 1989 log.go:172] (0x4000aa8a50) Reply frame received for 5\nI0816 20:57:50.601164 1989 log.go:172] (0x4000aa8a50) Data frame received for 5\nI0816 20:57:50.601347 1989 log.go:172] (0x4000bf80a0) (5) Data frame handling\nI0816 20:57:50.601699 1989 log.go:172] (0x4000bf80a0) (5) Data frame sent\n+ nc -zv -t -w 2 172.18.0.6 31960\nI0816 20:57:50.612459 1989 log.go:172] (0x4000aa8a50) Data frame received for 5\nI0816 20:57:50.612564 1989 log.go:172] (0x4000bf80a0) (5) Data frame handling\nI0816 20:57:50.612705 1989 log.go:172] (0x4000bf80a0) (5) Data frame sent\nI0816 20:57:50.612930 1989 log.go:172] (0x4000aa8a50) Data frame received for 5\nI0816 20:57:50.613036 1989 log.go:172] (0x4000bf80a0) (5) Data frame handling\nConnection to 172.18.0.6 31960 port [tcp/31960] succeeded!\nI0816 20:57:50.613948 1989 log.go:172] (0x4000aa8a50) Data frame received for 3\nI0816 20:57:50.614136 1989 log.go:172] (0x400080dae0) (3) Data frame handling\nI0816 20:57:50.614903 1989 log.go:172] (0x4000aa8a50) Data frame received for 1\nI0816 20:57:50.615010 1989 log.go:172] (0x400099c000) (1) Data frame handling\nI0816 20:57:50.615160 1989 log.go:172] (0x400099c000) (1) Data frame sent\nI0816 20:57:50.619807 1989 log.go:172] (0x4000aa8a50) (0x400099c000) Stream removed, broadcasting: 1\nI0816 20:57:50.620677 1989 log.go:172] (0x4000aa8a50) Go away received\nI0816 20:57:50.624351 1989 log.go:172] (0x4000aa8a50) (0x400099c000) Stream removed, broadcasting: 1\nI0816 20:57:50.624860 1989 log.go:172] (0x4000aa8a50) (0x400080dae0) Stream removed, broadcasting: 3\nI0816 20:57:50.625128 1989 log.go:172] (0x4000aa8a50) (0x4000bf80a0) Stream removed, broadcasting: 5\n" Aug 16 20:57:50.636: INFO: stdout: "" Aug 16 20:57:50.636: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-2283 execpodtb9gx -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.3 31960' Aug 16 20:57:52.158: INFO: stderr: "I0816 20:57:52.052759 2013 log.go:172] (0x40001122c0) (0x4000a8a000) Create stream\nI0816 20:57:52.055148 2013 log.go:172] (0x40001122c0) (0x4000a8a000) Stream added, broadcasting: 1\nI0816 20:57:52.063866 2013 log.go:172] (0x40001122c0) Reply frame 
received for 1\nI0816 20:57:52.064420 2013 log.go:172] (0x40001122c0) (0x4000a2a000) Create stream\nI0816 20:57:52.064481 2013 log.go:172] (0x40001122c0) (0x4000a2a000) Stream added, broadcasting: 3\nI0816 20:57:52.066332 2013 log.go:172] (0x40001122c0) Reply frame received for 3\nI0816 20:57:52.066797 2013 log.go:172] (0x40001122c0) (0x4000a8a140) Create stream\nI0816 20:57:52.066900 2013 log.go:172] (0x40001122c0) (0x4000a8a140) Stream added, broadcasting: 5\nI0816 20:57:52.068325 2013 log.go:172] (0x40001122c0) Reply frame received for 5\nI0816 20:57:52.137107 2013 log.go:172] (0x40001122c0) Data frame received for 5\nI0816 20:57:52.137309 2013 log.go:172] (0x40001122c0) Data frame received for 3\nI0816 20:57:52.137514 2013 log.go:172] (0x4000a8a140) (5) Data frame handling\nI0816 20:57:52.138333 2013 log.go:172] (0x40001122c0) Data frame received for 1\nI0816 20:57:52.138520 2013 log.go:172] (0x4000a8a000) (1) Data frame handling\nI0816 20:57:52.138739 2013 log.go:172] (0x4000a2a000) (3) Data frame handling\nI0816 20:57:52.139383 2013 log.go:172] (0x4000a8a140) (5) Data frame sent\n+ nc -zv -t -w 2 172.18.0.3 31960\nConnection to 172.18.0.3 31960 port [tcp/31960] succeeded!\nI0816 20:57:52.140374 2013 log.go:172] (0x4000a8a000) (1) Data frame sent\nI0816 20:57:52.140853 2013 log.go:172] (0x40001122c0) Data frame received for 5\nI0816 20:57:52.140960 2013 log.go:172] (0x4000a8a140) (5) Data frame handling\nI0816 20:57:52.143104 2013 log.go:172] (0x40001122c0) (0x4000a8a000) Stream removed, broadcasting: 1\nI0816 20:57:52.145089 2013 log.go:172] (0x40001122c0) Go away received\nI0816 20:57:52.148485 2013 log.go:172] (0x40001122c0) (0x4000a8a000) Stream removed, broadcasting: 1\nI0816 20:57:52.148870 2013 log.go:172] (0x40001122c0) (0x4000a2a000) Stream removed, broadcasting: 3\nI0816 20:57:52.149107 2013 log.go:172] (0x40001122c0) (0x4000a8a140) Stream removed, broadcasting: 5\n" Aug 16 20:57:52.159: INFO: stdout: "" [AfterEach] [sig-network] Services /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:57:52.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2283" for this suite. 
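For context, a minimal sketch of a NodePort service comparable to the "nodeport-test" service probed above with nc. It is illustrative rather than the test's own definition: the selector label is a placeholder, and the structs follow k8s.io/api v0.17.x. The port (80), the allocated nodePort (31960), and the ClusterIP (10.99.140.174) in the log are what the three nc checks target.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// NodePort service: reachable on its ClusterIP:80 inside the cluster and on
	// <node-ip>:<nodePort> of every node.
	svc := corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "nodeport-test"},
		Spec: corev1.ServiceSpec{
			Type:     corev1.ServiceTypeNodePort,
			Selector: map[string]string{"name": "nodeport-test"}, // placeholder selector
			Ports: []corev1.ServicePort{{
				Port:       80,
				TargetPort: intstr.FromInt(80),
				// NodePort is normally left as 0 and allocated by the apiserver
				// from the default 30000-32767 range (31960 in the run above).
			}},
		},
	}
	fmt.Printf("%s type=%s\n", svc.Name, svc.Spec.Type)
}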
[AfterEach] [sig-network] Services /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:24.617 seconds] [sig-network] Services /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":278,"completed":135,"skipped":2178,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:57:52.178: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0816 20:58:34.550258 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Aug 16 20:58:34.550: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:58:34.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5476" for this suite. 
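A hedged sketch of the delete call the garbage-collector test above revolves around: deleting the controller with Orphan propagation removes the RC but deliberately leaves its pods running, which is why the test waits 30 seconds to confirm the pods survive. The controller name is a placeholder; the kubeconfig path and namespace are the ones shown in this run, and the call signatures follow client-go v0.17.x (no context argument).

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Orphan propagation: the RC object is deleted, its pods keep running and
	// their ownerReferences to the RC are cleared instead of being collected.
	orphan := metav1.DeletePropagationOrphan
	err = clientset.CoreV1().ReplicationControllers("gc-5476").Delete(
		"example-rc", // placeholder controller name
		&metav1.DeleteOptions{PropagationPolicy: &orphan},
	)
	fmt.Println("delete result:", err)
}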
• [SLOW TEST:42.386 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":278,"completed":136,"skipped":2182,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:58:34.565: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:59:06.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-8315" for this suite. STEP: Destroying namespace "nsdeletetest-1398" for this suite. Aug 16 20:59:06.783: INFO: Namespace nsdeletetest-1398 was already deleted STEP: Destroying namespace "nsdeletetest-7497" for this suite. 
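A brief sketch of the namespace lifecycle the test above walks through, again illustrative only: deleting a namespace tears down everything inside it, and once it is gone a namespace with the same name can be recreated and starts out empty. The namespace name is a placeholder (the run above used nsdeletetest-1398); call signatures follow client-go v0.17.x.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ns := "nsdeletetest-example" // placeholder namespace name

	// Deleting the namespace removes all pods (and other objects) it contains.
	if err := clientset.CoreV1().Namespaces().Delete(ns, &metav1.DeleteOptions{}); err != nil {
		fmt.Println("delete:", err)
	}

	// After waiting for the namespace to disappear, it can be recreated empty.
	_, err = clientset.CoreV1().Namespaces().Create(&corev1.Namespace{
		ObjectMeta: metav1.ObjectMeta{Name: ns},
	})
	fmt.Println("recreate:", err)
}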
• [SLOW TEST:32.223 seconds] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":278,"completed":137,"skipped":2195,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-auth] ServiceAccounts /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:59:06.791: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: getting the auto-created API token STEP: reading a file in the container Aug 16 20:59:14.633: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1536 pod-service-account-b98236c8-41db-4f56-a029-54150bf5d486 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Aug 16 20:59:16.565: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1536 pod-service-account-b98236c8-41db-4f56-a029-54150bf5d486 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Aug 16 20:59:18.025: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1536 pod-service-account-b98236c8-41db-4f56-a029-54150bf5d486 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:59:19.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-1536" for this suite. 
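The three kubectl exec commands above read the files that the kubelet projects into every pod that mounts its service-account token (the default). A small sketch of doing the same from inside a container, using only the standard library; the mount path is the standard in-pod location shown in the log.

package main

import (
	"fmt"
	"io/ioutil"
)

func main() {
	// Files projected by the service-account admission/kubelet machinery into
	// each pod; the test above cats them one by one via kubectl exec.
	base := "/var/run/secrets/kubernetes.io/serviceaccount/"
	for _, f := range []string{"token", "ca.crt", "namespace"} {
		data, err := ioutil.ReadFile(base + f)
		if err != nil {
			fmt.Println(f, "error:", err)
			continue
		}
		fmt.Printf("%s: %d bytes\n", f, len(data))
	}
}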
• [SLOW TEST:12.713 seconds] [sig-auth] ServiceAccounts /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":278,"completed":138,"skipped":2221,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:59:19.505: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on tmpfs Aug 16 20:59:19.941: INFO: Waiting up to 5m0s for pod "pod-726e76a3-44fd-4262-a995-92fb03061e1e" in namespace "emptydir-7803" to be "success or failure" Aug 16 20:59:19.947: INFO: Pod "pod-726e76a3-44fd-4262-a995-92fb03061e1e": Phase="Pending", Reason="", readiness=false. Elapsed: 5.40869ms Aug 16 20:59:22.047: INFO: Pod "pod-726e76a3-44fd-4262-a995-92fb03061e1e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.105504451s Aug 16 20:59:24.053: INFO: Pod "pod-726e76a3-44fd-4262-a995-92fb03061e1e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.111671897s STEP: Saw pod success Aug 16 20:59:24.053: INFO: Pod "pod-726e76a3-44fd-4262-a995-92fb03061e1e" satisfied condition "success or failure" Aug 16 20:59:24.058: INFO: Trying to get logs from node jerma-worker pod pod-726e76a3-44fd-4262-a995-92fb03061e1e container test-container: STEP: delete the pod Aug 16 20:59:24.193: INFO: Waiting for pod pod-726e76a3-44fd-4262-a995-92fb03061e1e to disappear Aug 16 20:59:24.204: INFO: Pod pod-726e76a3-44fd-4262-a995-92fb03061e1e no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:59:24.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7803" for this suite. 
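For reference, a hedged sketch of the pod shape the emptyDir test above exercises: a memory-backed (tmpfs) emptyDir volume, a pod running as a non-root user, and a file created with 0644 permissions that the test then inspects. The pod name, UID, image, and shell command are illustrative placeholders; structs follow k8s.io/api v0.17.x.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int64Ptr(i int64) *int64 { return &i }

func main() {
	// Non-root pod writing a 0644 file onto a tmpfs-backed (Medium: Memory) emptyDir.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-0644-tmpfs-example"}, // placeholder
		Spec: corev1.PodSpec{
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: int64Ptr(1001)},
			RestartPolicy:   corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox", // placeholder image
				Command: []string{"sh", "-c", "umask 0022; echo content > /test-volume/file; ls -l /test-volume/file"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test-volume",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
		},
	}
	fmt.Println(pod.Name)
}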
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":139,"skipped":2229,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:59:24.220: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu limit [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Aug 16 20:59:24.474: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2384ab89-94d0-49ad-9ca0-b0474910c746" in namespace "downward-api-9878" to be "success or failure" Aug 16 20:59:24.480: INFO: Pod "downwardapi-volume-2384ab89-94d0-49ad-9ca0-b0474910c746": Phase="Pending", Reason="", readiness=false. Elapsed: 6.618334ms Aug 16 20:59:26.485: INFO: Pod "downwardapi-volume-2384ab89-94d0-49ad-9ca0-b0474910c746": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011338653s Aug 16 20:59:28.491: INFO: Pod "downwardapi-volume-2384ab89-94d0-49ad-9ca0-b0474910c746": Phase="Running", Reason="", readiness=true. Elapsed: 4.017139739s Aug 16 20:59:30.497: INFO: Pod "downwardapi-volume-2384ab89-94d0-49ad-9ca0-b0474910c746": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.023549117s STEP: Saw pod success Aug 16 20:59:30.498: INFO: Pod "downwardapi-volume-2384ab89-94d0-49ad-9ca0-b0474910c746" satisfied condition "success or failure" Aug 16 20:59:30.503: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-2384ab89-94d0-49ad-9ca0-b0474910c746 container client-container: STEP: delete the pod Aug 16 20:59:30.522: INFO: Waiting for pod downwardapi-volume-2384ab89-94d0-49ad-9ca0-b0474910c746 to disappear Aug 16 20:59:30.526: INFO: Pod downwardapi-volume-2384ab89-94d0-49ad-9ca0-b0474910c746 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:59:30.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9878" for this suite. 
• [SLOW TEST:6.319 seconds] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide container's cpu limit [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":140,"skipped":2234,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} S ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:59:30.539: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 20:59:46.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4449" for this suite. • [SLOW TEST:16.240 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance]","total":278,"completed":141,"skipped":2235,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 20:59:46.780: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-a4ba3913-2897-4668-ba88-a46322605d37 in namespace container-probe-68 Aug 16 20:59:52.939: INFO: Started pod liveness-a4ba3913-2897-4668-ba88-a46322605d37 in namespace container-probe-68 STEP: checking the pod's current state and verifying that restartCount is present Aug 16 20:59:52.943: INFO: Initial restart count of pod liveness-a4ba3913-2897-4668-ba88-a46322605d37 is 0 Aug 16 21:00:07.419: INFO: Restart count of pod container-probe-68/liveness-a4ba3913-2897-4668-ba88-a46322605d37 is now 1 (14.475467774s elapsed) Aug 16 21:00:23.581: INFO: Restart count of pod container-probe-68/liveness-a4ba3913-2897-4668-ba88-a46322605d37 is now 2 (30.637270009s elapsed) Aug 16 21:00:44.033: INFO: Restart count of pod container-probe-68/liveness-a4ba3913-2897-4668-ba88-a46322605d37 is now 3 (51.089341487s elapsed) Aug 16 21:01:04.558: INFO: Restart count of pod container-probe-68/liveness-a4ba3913-2897-4668-ba88-a46322605d37 is now 4 (1m11.61434452s elapsed) Aug 16 21:02:07.175: INFO: Restart count of pod container-probe-68/liveness-a4ba3913-2897-4668-ba88-a46322605d37 is now 5 (2m14.231442587s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 21:02:07.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-68" for this suite. 
• [SLOW TEST:140.470 seconds] [k8s.io] Probing container /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":278,"completed":142,"skipped":2249,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 21:02:07.251: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Aug 16 21:02:07.549: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 21:02:11.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7018" for this suite. 
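For context, a sketch of retrieving container logs with client-go; the conformance test above hits the same logs endpoint but over a websocket transport rather than plain HTTP streaming. The pod name is a placeholder, the namespace and kubeconfig path are the ones in this run, and the call signatures follow client-go v0.17.x (Stream takes no context there; newer versions do).

package main

import (
	"fmt"
	"io/ioutil"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Stream the container's logs; the test exercises the websocket variant of
	// this endpoint, but the request shape is the same.
	req := clientset.CoreV1().Pods("pods-7018").GetLogs("pod-logs-example", &corev1.PodLogOptions{})
	stream, err := req.Stream() // client-go v0.17.x signature
	if err != nil {
		panic(err)
	}
	defer stream.Close()

	data, err := ioutil.ReadAll(stream)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s", data)
}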
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":278,"completed":143,"skipped":2253,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 21:02:11.673: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Aug 16 21:02:11.877: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. Aug 16 21:02:11.897: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 16 21:02:11.942: INFO: Number of nodes with available pods: 0 Aug 16 21:02:11.942: INFO: Node jerma-worker is running more than one daemon pod Aug 16 21:02:12.953: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 16 21:02:12.959: INFO: Number of nodes with available pods: 0 Aug 16 21:02:12.959: INFO: Node jerma-worker is running more than one daemon pod Aug 16 21:02:14.087: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 16 21:02:14.093: INFO: Number of nodes with available pods: 0 Aug 16 21:02:14.093: INFO: Node jerma-worker is running more than one daemon pod Aug 16 21:02:14.952: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 16 21:02:14.958: INFO: Number of nodes with available pods: 0 Aug 16 21:02:14.958: INFO: Node jerma-worker is running more than one daemon pod Aug 16 21:02:15.952: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 16 21:02:15.957: INFO: Number of nodes with available pods: 0 Aug 16 21:02:15.957: INFO: Node jerma-worker is running more than one daemon pod Aug 16 21:02:16.953: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 16 21:02:16.959: INFO: Number of 
nodes with available pods: 1 Aug 16 21:02:16.959: INFO: Node jerma-worker2 is running more than one daemon pod Aug 16 21:02:17.961: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 16 21:02:17.967: INFO: Number of nodes with available pods: 2 Aug 16 21:02:17.967: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Aug 16 21:02:18.015: INFO: Wrong image for pod: daemon-set-2nkvb. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Aug 16 21:02:18.016: INFO: Wrong image for pod: daemon-set-lt26s. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Aug 16 21:02:18.038: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 16 21:02:19.047: INFO: Wrong image for pod: daemon-set-2nkvb. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Aug 16 21:02:19.047: INFO: Wrong image for pod: daemon-set-lt26s. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Aug 16 21:02:19.056: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 16 21:02:20.058: INFO: Wrong image for pod: daemon-set-2nkvb. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Aug 16 21:02:20.058: INFO: Wrong image for pod: daemon-set-lt26s. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Aug 16 21:02:20.067: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 16 21:02:21.046: INFO: Wrong image for pod: daemon-set-2nkvb. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Aug 16 21:02:21.046: INFO: Wrong image for pod: daemon-set-lt26s. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Aug 16 21:02:21.054: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 16 21:02:22.045: INFO: Wrong image for pod: daemon-set-2nkvb. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Aug 16 21:02:22.045: INFO: Pod daemon-set-2nkvb is not available Aug 16 21:02:22.045: INFO: Wrong image for pod: daemon-set-lt26s. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Aug 16 21:02:22.108: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 16 21:02:23.045: INFO: Wrong image for pod: daemon-set-2nkvb. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Aug 16 21:02:23.045: INFO: Pod daemon-set-2nkvb is not available Aug 16 21:02:23.045: INFO: Wrong image for pod: daemon-set-lt26s. 
Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Aug 16 21:02:23.052: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 16 21:02:24.147: INFO: Wrong image for pod: daemon-set-2nkvb. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Aug 16 21:02:24.147: INFO: Pod daemon-set-2nkvb is not available Aug 16 21:02:24.147: INFO: Wrong image for pod: daemon-set-lt26s. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Aug 16 21:02:24.188: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 16 21:02:25.048: INFO: Wrong image for pod: daemon-set-2nkvb. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Aug 16 21:02:25.048: INFO: Pod daemon-set-2nkvb is not available Aug 16 21:02:25.048: INFO: Wrong image for pod: daemon-set-lt26s. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Aug 16 21:02:25.055: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 16 21:02:26.047: INFO: Wrong image for pod: daemon-set-2nkvb. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Aug 16 21:02:26.047: INFO: Pod daemon-set-2nkvb is not available Aug 16 21:02:26.047: INFO: Wrong image for pod: daemon-set-lt26s. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Aug 16 21:02:26.057: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 16 21:02:27.047: INFO: Wrong image for pod: daemon-set-2nkvb. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Aug 16 21:02:27.047: INFO: Pod daemon-set-2nkvb is not available Aug 16 21:02:27.047: INFO: Wrong image for pod: daemon-set-lt26s. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Aug 16 21:02:27.056: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 16 21:02:28.046: INFO: Wrong image for pod: daemon-set-2nkvb. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Aug 16 21:02:28.047: INFO: Pod daemon-set-2nkvb is not available Aug 16 21:02:28.047: INFO: Wrong image for pod: daemon-set-lt26s. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Aug 16 21:02:28.056: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 16 21:02:29.047: INFO: Wrong image for pod: daemon-set-2nkvb. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
Aug 16 21:02:29.047: INFO: Pod daemon-set-2nkvb is not available Aug 16 21:02:29.047: INFO: Wrong image for pod: daemon-set-lt26s. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Aug 16 21:02:29.058: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 16 21:02:30.047: INFO: Wrong image for pod: daemon-set-2nkvb. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Aug 16 21:02:30.047: INFO: Pod daemon-set-2nkvb is not available Aug 16 21:02:30.047: INFO: Wrong image for pod: daemon-set-lt26s. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Aug 16 21:02:30.055: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 16 21:02:31.045: INFO: Wrong image for pod: daemon-set-2nkvb. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Aug 16 21:02:31.045: INFO: Pod daemon-set-2nkvb is not available Aug 16 21:02:31.046: INFO: Wrong image for pod: daemon-set-lt26s. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Aug 16 21:02:31.051: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 16 21:02:32.046: INFO: Wrong image for pod: daemon-set-lt26s. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Aug 16 21:02:32.047: INFO: Pod daemon-set-pm8k5 is not available Aug 16 21:02:32.056: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 16 21:02:33.074: INFO: Wrong image for pod: daemon-set-lt26s. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Aug 16 21:02:33.074: INFO: Pod daemon-set-pm8k5 is not available Aug 16 21:02:33.081: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 16 21:02:34.061: INFO: Wrong image for pod: daemon-set-lt26s. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Aug 16 21:02:34.061: INFO: Pod daemon-set-pm8k5 is not available Aug 16 21:02:34.103: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 16 21:02:35.045: INFO: Wrong image for pod: daemon-set-lt26s. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Aug 16 21:02:35.045: INFO: Pod daemon-set-pm8k5 is not available Aug 16 21:02:35.053: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 16 21:02:36.080: INFO: Wrong image for pod: daemon-set-lt26s. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
Aug 16 21:02:36.251: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 16 21:02:37.046: INFO: Wrong image for pod: daemon-set-lt26s. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Aug 16 21:02:37.053: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 16 21:02:38.164: INFO: Wrong image for pod: daemon-set-lt26s. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Aug 16 21:02:38.164: INFO: Pod daemon-set-lt26s is not available Aug 16 21:02:38.173: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 16 21:02:39.047: INFO: Wrong image for pod: daemon-set-lt26s. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Aug 16 21:02:39.047: INFO: Pod daemon-set-lt26s is not available Aug 16 21:02:39.056: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 16 21:02:40.075: INFO: Wrong image for pod: daemon-set-lt26s. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Aug 16 21:02:40.075: INFO: Pod daemon-set-lt26s is not available Aug 16 21:02:40.084: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 16 21:02:41.046: INFO: Wrong image for pod: daemon-set-lt26s. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Aug 16 21:02:41.046: INFO: Pod daemon-set-lt26s is not available Aug 16 21:02:41.055: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 16 21:02:42.046: INFO: Pod daemon-set-vzs28 is not available Aug 16 21:02:42.055: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
Aug 16 21:02:42.063: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 16 21:02:42.067: INFO: Number of nodes with available pods: 1 Aug 16 21:02:42.068: INFO: Node jerma-worker is running more than one daemon pod Aug 16 21:02:43.385: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 16 21:02:43.391: INFO: Number of nodes with available pods: 1 Aug 16 21:02:43.391: INFO: Node jerma-worker is running more than one daemon pod Aug 16 21:02:44.131: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 16 21:02:44.137: INFO: Number of nodes with available pods: 1 Aug 16 21:02:44.138: INFO: Node jerma-worker is running more than one daemon pod Aug 16 21:02:45.077: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 16 21:02:45.083: INFO: Number of nodes with available pods: 1 Aug 16 21:02:45.083: INFO: Node jerma-worker is running more than one daemon pod Aug 16 21:02:46.076: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 16 21:02:46.099: INFO: Number of nodes with available pods: 2 Aug 16 21:02:46.100: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3083, will wait for the garbage collector to delete the pods Aug 16 21:02:46.179: INFO: Deleting DaemonSet.extensions daemon-set took: 6.013951ms Aug 16 21:02:46.480: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.736051ms Aug 16 21:02:51.784: INFO: Number of nodes with available pods: 0 Aug 16 21:02:51.784: INFO: Number of running nodes: 0, number of available pods: 0 Aug 16 21:02:51.787: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3083/daemonsets","resourceVersion":"500581"},"items":null} Aug 16 21:02:51.790: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3083/pods","resourceVersion":"500581"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 21:02:51.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-3083" for this suite. 
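------------------------------
The polling above is the rolling update at work: the DaemonSet's pod template image is changed from docker.io/library/httpd:2.4.38-alpine to gcr.io/kubernetes-e2e-test-images/agnhost:2.8, old pods are replaced one at a time, and the check only passes once both schedulable workers again report an available pod. As a rough illustration of the object shape involved (not the suite's own code; the label, container name, and namespace wiring are simplified guesses), a minimal Go sketch using the k8s.io/api types:

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	labels := map[string]string{"daemonset-name": "daemon-set"} // hypothetical label
	ds := appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set", Namespace: "daemonsets-3083"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			// RollingUpdate is what lets an image change propagate pod by pod,
			// producing the repeated "Wrong image for pod" polling seen above.
			UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
				Type: appsv1.RollingUpdateDaemonSetStrategyType,
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "docker.io/library/httpd:2.4.38-alpine", // later updated to agnhost:2.8
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(ds, "", "  ")
	fmt.Println(string(out))
}

With strategy type RollingUpdate, patching spec.template.spec.containers[0].image is enough to trigger the pod-by-pod replacement the log records.
------------------------------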
• [SLOW TEST:40.139 seconds] [sig-apps] Daemon set [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":278,"completed":144,"skipped":2282,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 21:02:51.816: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Aug 16 21:02:51.940: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-78b29b2e-c95b-4d0e-8b99-f3c249d96260" in namespace "security-context-test-7494" to be "success or failure" Aug 16 21:02:51.949: INFO: Pod "busybox-readonly-false-78b29b2e-c95b-4d0e-8b99-f3c249d96260": Phase="Pending", Reason="", readiness=false. Elapsed: 9.222906ms Aug 16 21:02:54.123: INFO: Pod "busybox-readonly-false-78b29b2e-c95b-4d0e-8b99-f3c249d96260": Phase="Pending", Reason="", readiness=false. Elapsed: 2.182798801s Aug 16 21:02:56.438: INFO: Pod "busybox-readonly-false-78b29b2e-c95b-4d0e-8b99-f3c249d96260": Phase="Pending", Reason="", readiness=false. Elapsed: 4.497837277s Aug 16 21:02:58.463: INFO: Pod "busybox-readonly-false-78b29b2e-c95b-4d0e-8b99-f3c249d96260": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.523190679s Aug 16 21:02:58.463: INFO: Pod "busybox-readonly-false-78b29b2e-c95b-4d0e-8b99-f3c249d96260" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 21:02:58.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-7494" for this suite. 
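------------------------------
The security-context spec above only needs a pod whose container sets readOnlyRootFilesystem to false and then performs a write that would fail on a read-only root filesystem; the pod reaching Succeeded is the assertion. A minimal sketch of such a pod, assuming an illustrative busybox write command (the actual command is not shown in the log):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	readOnly := false // readOnlyRootFilesystem=false: the container may write to its root filesystem
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-readonly-false", Namespace: "security-context-test-7494"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "touch /should-be-writable && echo ok"}, // hypothetical write
				SecurityContext: &corev1.SecurityContext{
					ReadOnlyRootFilesystem: &readOnly,
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
------------------------------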
• [SLOW TEST:6.718 seconds] [k8s.io] Security Context /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 When creating a pod with readOnlyRootFilesystem /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:164 should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":278,"completed":145,"skipped":2325,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 21:02:58.535: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Aug 16 21:02:58.956: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Aug 16 21:03:17.706: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7586 create -f -' Aug 16 21:03:24.132: INFO: stderr: "" Aug 16 21:03:24.132: INFO: stdout: "e2e-test-crd-publish-openapi-177-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Aug 16 21:03:24.133: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7586 delete e2e-test-crd-publish-openapi-177-crds test-cr' Aug 16 21:03:25.551: INFO: stderr: "" Aug 16 21:03:25.552: INFO: stdout: "e2e-test-crd-publish-openapi-177-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Aug 16 21:03:25.552: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7586 apply -f -' Aug 16 21:03:27.133: INFO: stderr: "" Aug 16 21:03:27.133: INFO: stdout: "e2e-test-crd-publish-openapi-177-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Aug 16 21:03:27.133: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7586 delete e2e-test-crd-publish-openapi-177-crds test-cr' Aug 16 21:03:28.406: INFO: stderr: "" Aug 16 21:03:28.406: INFO: stdout: "e2e-test-crd-publish-openapi-177-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl 
explain works to explain CR without validation schema Aug 16 21:03:28.407: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-177-crds' Aug 16 21:03:30.052: INFO: stderr: "" Aug 16 21:03:30.052: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-177-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 21:03:48.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7586" for this suite. • [SLOW TEST:50.362 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":278,"completed":146,"skipped":2325,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 21:03:48.898: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service externalname-service with the type=ExternalName in namespace services-4910 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-4910 I0816 21:03:49.037599 7 runners.go:189] Created replication controller with name: externalname-service, namespace: services-4910, replica count: 2 I0816 21:03:52.088705 7 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0816 21:03:55.089612 7 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Aug 16 21:03:55.089: INFO: Creating new exec pod Aug 16 21:04:00.117: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4910 execpodms7t5 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Aug 16 21:04:01.549: INFO: stderr: "I0816 21:04:01.450528 2230 log.go:172] (0x4000afa840) (0x4000bc2000) Create stream\nI0816 21:04:01.453750 2230 log.go:172] (0x4000afa840) (0x4000bc2000) Stream added, broadcasting: 1\nI0816 21:04:01.462460 2230 log.go:172] (0x4000afa840) Reply frame received for 1\nI0816 21:04:01.462959 2230 log.go:172] (0x4000afa840) (0x4000aa6000) Create stream\nI0816 21:04:01.463017 2230 log.go:172] (0x4000afa840) (0x4000aa6000) Stream added, broadcasting: 3\nI0816 21:04:01.464685 2230 log.go:172] (0x4000afa840) Reply frame received for 3\nI0816 21:04:01.465106 2230 log.go:172] (0x4000afa840) (0x40008e1ae0) Create stream\nI0816 21:04:01.465185 2230 log.go:172] (0x4000afa840) (0x40008e1ae0) Stream added, broadcasting: 5\nI0816 21:04:01.466766 2230 log.go:172] (0x4000afa840) Reply frame received for 5\nI0816 21:04:01.533408 2230 log.go:172] (0x4000afa840) Data frame received for 3\nI0816 21:04:01.534032 2230 log.go:172] (0x4000afa840) Data frame received for 5\nI0816 21:04:01.534216 2230 log.go:172] (0x40008e1ae0) (5) Data frame handling\nI0816 21:04:01.534380 2230 log.go:172] (0x4000aa6000) (3) Data frame handling\nI0816 21:04:01.535199 2230 log.go:172] (0x4000afa840) Data frame received for 1\nI0816 21:04:01.535305 2230 log.go:172] (0x4000bc2000) (1) Data frame handling\nI0816 21:04:01.536020 2230 log.go:172] (0x4000bc2000) (1) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0816 21:04:01.537122 2230 log.go:172] (0x40008e1ae0) (5) Data frame sent\nI0816 21:04:01.537250 2230 log.go:172] (0x4000afa840) Data frame received for 5\nI0816 21:04:01.537378 2230 log.go:172] (0x40008e1ae0) (5) Data frame handling\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0816 21:04:01.537549 2230 log.go:172] (0x40008e1ae0) (5) Data frame sent\nI0816 21:04:01.537664 2230 log.go:172] (0x4000afa840) Data frame received for 5\nI0816 21:04:01.537827 2230 log.go:172] (0x4000afa840) (0x4000bc2000) Stream removed, broadcasting: 1\nI0816 21:04:01.538810 2230 log.go:172] (0x40008e1ae0) (5) Data frame handling\nI0816 21:04:01.539555 2230 log.go:172] (0x4000afa840) Go away received\nI0816 21:04:01.542138 2230 log.go:172] (0x4000afa840) (0x4000bc2000) Stream removed, broadcasting: 1\nI0816 21:04:01.542388 2230 log.go:172] (0x4000afa840) (0x4000aa6000) Stream removed, broadcasting: 3\nI0816 21:04:01.542560 2230 log.go:172] (0x4000afa840) (0x40008e1ae0) Stream removed, broadcasting: 5\n" Aug 16 21:04:01.550: INFO: stdout: "" Aug 16 21:04:01.555: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4910 execpodms7t5 -- /bin/sh -x -c nc -zv -t -w 2 10.99.11.197 80' Aug 16 21:04:03.023: INFO: stderr: "I0816 21:04:02.908266 2254 log.go:172] (0x4000570000) (0x400052f400) Create stream\nI0816 21:04:02.911607 2254 log.go:172] (0x4000570000) (0x400052f400) Stream added, broadcasting: 1\nI0816 21:04:02.925145 2254 log.go:172] (0x4000570000) Reply frame received for 1\nI0816 21:04:02.926162 2254 log.go:172] (0x4000570000) (0x4000960000) Create stream\nI0816 21:04:02.926291 2254 log.go:172] (0x4000570000) (0x4000960000) Stream added, broadcasting: 3\nI0816 21:04:02.928293 2254 log.go:172] (0x4000570000) Reply frame received for 3\nI0816 21:04:02.928928 2254 log.go:172] (0x4000570000) (0x4000772000) Create stream\nI0816 21:04:02.929047 2254 log.go:172] (0x4000570000) (0x4000772000) 
Stream added, broadcasting: 5\nI0816 21:04:02.930779 2254 log.go:172] (0x4000570000) Reply frame received for 5\nI0816 21:04:03.003585 2254 log.go:172] (0x4000570000) Data frame received for 5\nI0816 21:04:03.003940 2254 log.go:172] (0x4000570000) Data frame received for 3\nI0816 21:04:03.004046 2254 log.go:172] (0x4000960000) (3) Data frame handling\nI0816 21:04:03.004258 2254 log.go:172] (0x4000772000) (5) Data frame handling\nI0816 21:04:03.005005 2254 log.go:172] (0x4000570000) Data frame received for 1\nI0816 21:04:03.005158 2254 log.go:172] (0x400052f400) (1) Data frame handling\nI0816 21:04:03.006249 2254 log.go:172] (0x400052f400) (1) Data frame sent\n+ nc -zv -t -w 2 10.99.11.197 80\nConnection to 10.99.11.197 80 port [tcp/http] succeeded!\nI0816 21:04:03.007078 2254 log.go:172] (0x4000772000) (5) Data frame sent\nI0816 21:04:03.007207 2254 log.go:172] (0x4000570000) Data frame received for 5\nI0816 21:04:03.007312 2254 log.go:172] (0x4000772000) (5) Data frame handling\nI0816 21:04:03.007777 2254 log.go:172] (0x4000570000) (0x400052f400) Stream removed, broadcasting: 1\nI0816 21:04:03.010717 2254 log.go:172] (0x4000570000) Go away received\nI0816 21:04:03.012844 2254 log.go:172] (0x4000570000) (0x400052f400) Stream removed, broadcasting: 1\nI0816 21:04:03.013678 2254 log.go:172] (0x4000570000) (0x4000960000) Stream removed, broadcasting: 3\nI0816 21:04:03.014291 2254 log.go:172] (0x4000570000) (0x4000772000) Stream removed, broadcasting: 5\n" Aug 16 21:04:03.024: INFO: stdout: "" Aug 16 21:04:03.024: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 21:04:03.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4910" for this suite. 
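------------------------------
The service test above starts from a Service of type ExternalName, flips it to type ClusterIP backed by the externalname-service replication controller, and then proves reachability by exec'ing `nc -zv -t -w 2` against both the service DNS name and the allocated cluster IP (10.99.11.197). A sketch of the before/after ServiceSpec; the ExternalName target and the selector labels are assumptions rather than values taken from the log:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Initial shape: an ExternalName service that merely aliases a DNS name.
	svc := corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "externalname-service", Namespace: "services-4910"},
		Spec: corev1.ServiceSpec{
			Type:         corev1.ServiceTypeExternalName,
			ExternalName: "foo.example.com", // hypothetical target
		},
	}

	// After the type change: a ClusterIP service selecting the two RC-managed pods,
	// which is what the nc probes above reach on port 80.
	svc.Spec = corev1.ServiceSpec{
		Type:     corev1.ServiceTypeClusterIP,
		Selector: map[string]string{"name": "externalname-service"}, // hypothetical selector
		Ports:    []corev1.ServicePort{{Port: 80}},
	}

	out, _ := json.MarshalIndent(svc, "", "  ")
	fmt.Println(string(out))
}
------------------------------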
[AfterEach] [sig-network] Services /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:14.306 seconds] [sig-network] Services /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":278,"completed":147,"skipped":2334,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} S ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 21:04:03.206: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation Aug 16 21:04:03.341: INFO: >>> kubeConfig: /root/.kube/config Aug 16 21:04:22.466: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 21:05:30.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7655" for this suite. 
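------------------------------
Both CRD-publishing specs above hinge on the apiserver publishing OpenAPI data for a freshly created CRD, so that kubectl create/apply accept a custom resource carrying arbitrary unknown properties and `kubectl explain` can describe the kind even without a validation schema. A rough sketch of such a schema-less CRD using the apiextensions v1 types; the group and kind names are simplified stand-ins for the generated e2e-test-crd-publish-openapi-* names, and the suite against this 1.17 cluster may well create its CRDs via v1beta1 instead (in v1, an open object with x-kubernetes-preserve-unknown-fields is the closest equivalent to "no schema"):

package main

import (
	"encoding/json"
	"fmt"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	preserve := true
	crd := apiextensionsv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-crds.crd-publish-openapi-test-empty.example.com"},
		Spec: apiextensionsv1.CustomResourceDefinitionSpec{
			Group: "crd-publish-openapi-test-empty.example.com",
			Scope: apiextensionsv1.NamespaceScoped,
			Names: apiextensionsv1.CustomResourceDefinitionNames{
				Plural:   "e2e-test-crds",
				Singular: "e2e-test-crd",
				Kind:     "E2eTestCrd",
				ListKind: "E2eTestCrdList",
			},
			Versions: []apiextensionsv1.CustomResourceDefinitionVersion{{
				Name:    "v1",
				Served:  true,
				Storage: true,
				// No per-field validation: an open object, so client-side validation
				// accepts requests with any unknown properties, as exercised above.
				Schema: &apiextensionsv1.CustomResourceValidation{
					OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{
						Type:                   "object",
						XPreserveUnknownFields: &preserve,
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(crd, "", "  ")
	fmt.Println(string(out))
}
------------------------------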
• [SLOW TEST:87.508 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":278,"completed":148,"skipped":2335,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 21:05:30.715: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Aug 16 21:05:32.516: INFO: Pod name wrapped-volume-race-cc3390a0-02e9-4b9b-a710-102546e65b01: Found 0 pods out of 5 Aug 16 21:05:37.563: INFO: Pod name wrapped-volume-race-cc3390a0-02e9-4b9b-a710-102546e65b01: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-cc3390a0-02e9-4b9b-a710-102546e65b01 in namespace emptydir-wrapper-9171, will wait for the garbage collector to delete the pods Aug 16 21:05:51.672: INFO: Deleting ReplicationController wrapped-volume-race-cc3390a0-02e9-4b9b-a710-102546e65b01 took: 8.208707ms Aug 16 21:05:52.073: INFO: Terminating ReplicationController wrapped-volume-race-cc3390a0-02e9-4b9b-a710-102546e65b01 pods took: 400.968857ms STEP: Creating RC which spawns configmap-volume pods Aug 16 21:06:02.697: INFO: Pod name wrapped-volume-race-455c5eeb-9e19-4348-983c-89b301efdc9e: Found 1 pods out of 5 Aug 16 21:06:07.713: INFO: Pod name wrapped-volume-race-455c5eeb-9e19-4348-983c-89b301efdc9e: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-455c5eeb-9e19-4348-983c-89b301efdc9e in namespace emptydir-wrapper-9171, will wait for the garbage collector to delete the pods Aug 16 21:06:25.856: INFO: Deleting ReplicationController wrapped-volume-race-455c5eeb-9e19-4348-983c-89b301efdc9e took: 26.736637ms Aug 16 21:06:26.257: INFO: Terminating ReplicationController wrapped-volume-race-455c5eeb-9e19-4348-983c-89b301efdc9e pods took: 400.87155ms STEP: Creating RC which spawns configmap-volume pods Aug 16 21:06:41.937: INFO: Pod name wrapped-volume-race-4aed2696-06a4-4563-9615-d35a47f74d73: Found 0 pods out of 5 Aug 16 
21:06:47.011: INFO: Pod name wrapped-volume-race-4aed2696-06a4-4563-9615-d35a47f74d73: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-4aed2696-06a4-4563-9615-d35a47f74d73 in namespace emptydir-wrapper-9171, will wait for the garbage collector to delete the pods Aug 16 21:07:06.945: INFO: Deleting ReplicationController wrapped-volume-race-4aed2696-06a4-4563-9615-d35a47f74d73 took: 568.777725ms Aug 16 21:07:08.647: INFO: Terminating ReplicationController wrapped-volume-race-4aed2696-06a4-4563-9615-d35a47f74d73 pods took: 1.702150682s STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 21:07:28.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-9171" for this suite. • [SLOW TEST:117.448 seconds] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":278,"completed":149,"skipped":2340,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 21:07:28.166: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with configMap that has name projected-configmap-test-upd-f0a50006-accf-4696-a9eb-09d7b70a9cde STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-f0a50006-accf-4696-a9eb-09d7b70a9cde STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 21:08:55.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5976" for this suite. 
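------------------------------
The projected-configMap spec above mounts a ConfigMap through a projected volume, updates the ConfigMap, and then waits (here well over a minute) for the kubelet to refresh the file content inside the running pod. A minimal sketch of the pod shape; the mount path, key name, and polling command are illustrative assumptions:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps", Namespace: "projected-5976"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "projected-configmap-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{
									Name: "projected-configmap-test-upd", // hypothetical short name
								},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "reader",
				Image: "docker.io/library/busybox:1.29",
				// Repeatedly read the projected file; the test waits until the updated
				// ConfigMap value appears here.
				Command:      []string{"sh", "-c", "while true; do cat /etc/projected/data-1; sleep 2; done"},
				VolumeMounts: []corev1.VolumeMount{{Name: "projected-configmap-volume", MountPath: "/etc/projected"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
------------------------------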
• [SLOW TEST:87.756 seconds] [sig-storage] Projected configMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":150,"skipped":2397,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 21:08:55.924: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 21:09:00.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-9058" for this suite. 
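------------------------------
The "should not conflict" spec above mounts a Secret-backed volume and a ConfigMap-backed volume side by side in one pod and then cleans both up; since the kubelet implements both volume types on top of emptyDir wrappers, the point is that the wrappers do not collide. A compact sketch, with the secret and configmap names invented for illustration:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-and-configmaps", Namespace: "emptydir-wrapper-9058"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{
				{
					Name: "secret-volume",
					VolumeSource: corev1.VolumeSource{
						Secret: &corev1.SecretVolumeSource{SecretName: "wrapped-volume-secret"}, // hypothetical name
					},
				},
				{
					Name: "configmap-volume",
					VolumeSource: corev1.VolumeSource{
						ConfigMap: &corev1.ConfigMapVolumeSource{
							LocalObjectReference: corev1.LocalObjectReference{Name: "wrapped-volume-configmap"}, // hypothetical name
						},
					},
				},
			},
			Containers: []corev1.Container{{
				Name:  "wrapper",
				Image: "k8s.gcr.io/pause:3.1",
				VolumeMounts: []corev1.VolumeMount{
					{Name: "secret-volume", MountPath: "/etc/secret-volume"},
					{Name: "configmap-volume", MountPath: "/etc/configmap-volume"},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
------------------------------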
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":278,"completed":151,"skipped":2416,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 21:09:00.228: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 16 21:09:03.454: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 16 21:09:05.497: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733208943, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733208943, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733208943, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733208943, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 16 21:09:07.503: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733208943, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733208943, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733208943, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733208943, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 
16 21:09:10.566: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 21:09:23.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5703" for this suite. STEP: Destroying namespace "webhook-5703-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:23.205 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":278,"completed":152,"skipped":2420,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 21:09:23.435: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Aug 16 21:09:23.541: INFO: Waiting 
up to 5m0s for pod "downwardapi-volume-653798fa-812f-46a8-84b8-dc96cfbe5836" in namespace "downward-api-1037" to be "success or failure" Aug 16 21:09:23.576: INFO: Pod "downwardapi-volume-653798fa-812f-46a8-84b8-dc96cfbe5836": Phase="Pending", Reason="", readiness=false. Elapsed: 35.123224ms Aug 16 21:09:25.582: INFO: Pod "downwardapi-volume-653798fa-812f-46a8-84b8-dc96cfbe5836": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041087376s Aug 16 21:09:27.589: INFO: Pod "downwardapi-volume-653798fa-812f-46a8-84b8-dc96cfbe5836": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048501422s Aug 16 21:09:29.594: INFO: Pod "downwardapi-volume-653798fa-812f-46a8-84b8-dc96cfbe5836": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.05322142s STEP: Saw pod success Aug 16 21:09:29.594: INFO: Pod "downwardapi-volume-653798fa-812f-46a8-84b8-dc96cfbe5836" satisfied condition "success or failure" Aug 16 21:09:29.598: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-653798fa-812f-46a8-84b8-dc96cfbe5836 container client-container: STEP: delete the pod Aug 16 21:09:29.646: INFO: Waiting for pod downwardapi-volume-653798fa-812f-46a8-84b8-dc96cfbe5836 to disappear Aug 16 21:09:29.659: INFO: Pod downwardapi-volume-653798fa-812f-46a8-84b8-dc96cfbe5836 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 21:09:29.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1037" for this suite. • [SLOW TEST:6.276 seconds] [sig-storage] Downward API volume /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":153,"skipped":2430,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 21:09:29.712: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-8988b415-33b8-49a8-aa7b-abb4383a59f9 STEP: Creating a pod 
to test consume configMaps Aug 16 21:09:29.875: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-5784dd9d-7d79-4c99-b267-f1a7aee443b6" in namespace "projected-4840" to be "success or failure" Aug 16 21:09:29.889: INFO: Pod "pod-projected-configmaps-5784dd9d-7d79-4c99-b267-f1a7aee443b6": Phase="Pending", Reason="", readiness=false. Elapsed: 13.896733ms Aug 16 21:09:31.972: INFO: Pod "pod-projected-configmaps-5784dd9d-7d79-4c99-b267-f1a7aee443b6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.096466411s Aug 16 21:09:33.977: INFO: Pod "pod-projected-configmaps-5784dd9d-7d79-4c99-b267-f1a7aee443b6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.101458226s Aug 16 21:09:35.983: INFO: Pod "pod-projected-configmaps-5784dd9d-7d79-4c99-b267-f1a7aee443b6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.107766263s STEP: Saw pod success Aug 16 21:09:35.983: INFO: Pod "pod-projected-configmaps-5784dd9d-7d79-4c99-b267-f1a7aee443b6" satisfied condition "success or failure" Aug 16 21:09:35.988: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-5784dd9d-7d79-4c99-b267-f1a7aee443b6 container projected-configmap-volume-test: STEP: delete the pod Aug 16 21:09:36.217: INFO: Waiting for pod pod-projected-configmaps-5784dd9d-7d79-4c99-b267-f1a7aee443b6 to disappear Aug 16 21:09:36.276: INFO: Pod pod-projected-configmaps-5784dd9d-7d79-4c99-b267-f1a7aee443b6 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 21:09:36.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4840" for this suite. • [SLOW TEST:6.576 seconds] [sig-storage] Projected configMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":278,"completed":154,"skipped":2451,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 21:09:36.290: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should 
set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Aug 16 21:09:36.576: INFO: Waiting up to 5m0s for pod "downwardapi-volume-782bf37b-047d-4ad8-97a8-fa0491db451d" in namespace "projected-7322" to be "success or failure" Aug 16 21:09:36.582: INFO: Pod "downwardapi-volume-782bf37b-047d-4ad8-97a8-fa0491db451d": Phase="Pending", Reason="", readiness=false. Elapsed: 5.85043ms Aug 16 21:09:38.595: INFO: Pod "downwardapi-volume-782bf37b-047d-4ad8-97a8-fa0491db451d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019051262s Aug 16 21:09:41.482: INFO: Pod "downwardapi-volume-782bf37b-047d-4ad8-97a8-fa0491db451d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.906195944s Aug 16 21:09:43.489: INFO: Pod "downwardapi-volume-782bf37b-047d-4ad8-97a8-fa0491db451d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.912680775s Aug 16 21:09:45.493: INFO: Pod "downwardapi-volume-782bf37b-047d-4ad8-97a8-fa0491db451d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.916671982s STEP: Saw pod success Aug 16 21:09:45.493: INFO: Pod "downwardapi-volume-782bf37b-047d-4ad8-97a8-fa0491db451d" satisfied condition "success or failure" Aug 16 21:09:45.509: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-782bf37b-047d-4ad8-97a8-fa0491db451d container client-container: STEP: delete the pod Aug 16 21:09:45.585: INFO: Waiting for pod downwardapi-volume-782bf37b-047d-4ad8-97a8-fa0491db451d to disappear Aug 16 21:09:45.611: INFO: Pod downwardapi-volume-782bf37b-047d-4ad8-97a8-fa0491db451d no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 21:09:45.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7322" for this suite. 
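------------------------------
The projected downwardAPI spec above writes a pod field into a file via a projected volume and checks that the per-item mode requested in the volume source is what the container observes on the file. A sketch of the relevant volume definition; the 0400 mode, the file path, and the agnhost mounttest arguments are assumptions, not values read from the log:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	mode := int32(0400) // per-item file mode the test then reads back from the mounted file
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-mode", Namespace: "projected-7322"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path: "podname",
									FieldRef: &corev1.ObjectFieldSelector{
										APIVersion: "v1",
										FieldPath:  "metadata.name",
									},
									Mode: &mode,
								}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "gcr.io/kubernetes-e2e-test-images/agnhost:2.8",
				Args:         []string{"mounttest", "--file_mode=/etc/podinfo/podname"}, // hypothetical args
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
------------------------------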
• [SLOW TEST:9.341 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":155,"skipped":2477,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 21:09:45.634: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir volume type on tmpfs Aug 16 21:09:46.136: INFO: Waiting up to 5m0s for pod "pod-f85f9267-2411-49a0-8a7f-b7a8fff2804f" in namespace "emptydir-6599" to be "success or failure" Aug 16 21:09:46.156: INFO: Pod "pod-f85f9267-2411-49a0-8a7f-b7a8fff2804f": Phase="Pending", Reason="", readiness=false. Elapsed: 20.15191ms Aug 16 21:09:48.296: INFO: Pod "pod-f85f9267-2411-49a0-8a7f-b7a8fff2804f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.160275372s Aug 16 21:09:50.399: INFO: Pod "pod-f85f9267-2411-49a0-8a7f-b7a8fff2804f": Phase="Running", Reason="", readiness=true. Elapsed: 4.263148587s Aug 16 21:09:52.406: INFO: Pod "pod-f85f9267-2411-49a0-8a7f-b7a8fff2804f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.269515364s STEP: Saw pod success Aug 16 21:09:52.406: INFO: Pod "pod-f85f9267-2411-49a0-8a7f-b7a8fff2804f" satisfied condition "success or failure" Aug 16 21:09:52.411: INFO: Trying to get logs from node jerma-worker pod pod-f85f9267-2411-49a0-8a7f-b7a8fff2804f container test-container: STEP: delete the pod Aug 16 21:09:52.439: INFO: Waiting for pod pod-f85f9267-2411-49a0-8a7f-b7a8fff2804f to disappear Aug 16 21:09:52.461: INFO: Pod pod-f85f9267-2411-49a0-8a7f-b7a8fff2804f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 21:09:52.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6599" for this suite. 
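------------------------------
The emptyDir spec above requests medium "Memory", so the volume is backed by tmpfs rather than the node's disk, and the container then reports the mount's filesystem type and permissions for the test to assert the expected default mode. A sketch under the same caveat as before (the mounttest flags are assumed, not quoted from the log):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-tmpfs", Namespace: "emptydir-6599"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium "Memory" backs the emptyDir with tmpfs instead of node disk.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "gcr.io/kubernetes-e2e-test-images/agnhost:2.8",
				// Hypothetical invocation: print the mount's filesystem type and permissions
				// so the test can assert the expected mode on the tmpfs mount point.
				Args:         []string{"mounttest", "--fs_type=/test-volume", "--file_perm=/test-volume"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
------------------------------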
• [SLOW TEST:6.841 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":156,"skipped":2509,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 21:09:52.476: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Aug 16 21:09:52.820: INFO: PodSpec: initContainers in spec.initContainers Aug 16 21:10:43.901: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-52a93d4e-fde9-4a2f-8df8-f1dc809cd96f", GenerateName:"", Namespace:"init-container-224", SelfLink:"/api/v1/namespaces/init-container-224/pods/pod-init-52a93d4e-fde9-4a2f-8df8-f1dc809cd96f", UID:"e2258fe5-1324-4e78-95b4-f9ce33319477", ResourceVersion:"503125", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63733208992, loc:(*time.Location)(0x726af60)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"818903429"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-95thz", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0x40051c0b00), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), 
FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-95thz", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-95thz", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-95thz", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), 
Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4006a09e68), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"jerma-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x4002bb1260), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0x4006a09ef0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0x4006a09f10)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0x4006a09f18), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0x4006a09f1c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733208993, loc:(*time.Location)(0x726af60)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733208993, loc:(*time.Location)(0x726af60)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733208993, loc:(*time.Location)(0x726af60)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733208992, loc:(*time.Location)(0x726af60)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.3", PodIP:"10.244.1.241", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.1.241"}}, StartTime:(*v1.Time)(0x40055adda0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0x40009a69a0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0x40009a6bd0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", 
ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://f93b7ca4f6428e5511f426284dd72524a6d94f77976bf1cbfbf0d50eda9150df", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0x40055adde0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0x40055addc0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0x4006a09f9f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 21:10:43.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-224" for this suite. • [SLOW TEST:51.458 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":278,"completed":157,"skipped":2516,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 21:10:43.935: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should check is all data is printed [Conformance] 
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Aug 16 21:10:44.183: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Aug 16 21:10:45.480: INFO: stderr: "" Aug 16 21:10:45.481: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.11\", GitCommit:\"ea5f00d93211b7c80247bf607cfa422ad6fb5347\", GitTreeState:\"clean\", BuildDate:\"2020-08-13T15:20:25Z\", GoVersion:\"go1.13.15\", Compiler:\"gc\", Platform:\"linux/arm64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.5\", GitCommit:\"e0fccafd69541e3750d460ba0f9743b90336f24f\", GitTreeState:\"clean\", BuildDate:\"2020-05-01T02:11:15Z\", GoVersion:\"go1.13.9\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 21:10:45.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-295" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":278,"completed":158,"skipped":2518,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 21:10:45.498: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Aug 16 21:10:45.647: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-6071 /api/v1/namespaces/watch-6071/configmaps/e2e-watch-test-watch-closed 31383b3a-6dba-4c16-88ef-8e048061ff0e 503143 0 2020-08-16 21:10:45 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Aug 16 21:10:45.648: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-6071 /api/v1/namespaces/watch-6071/configmaps/e2e-watch-test-watch-closed 31383b3a-6dba-4c16-88ef-8e048061ff0e 503144 0 2020-08-16 21:10:45 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last 
resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Aug 16 21:10:45.661: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-6071 /api/v1/namespaces/watch-6071/configmaps/e2e-watch-test-watch-closed 31383b3a-6dba-4c16-88ef-8e048061ff0e 503145 0 2020-08-16 21:10:45 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Aug 16 21:10:45.662: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-6071 /api/v1/namespaces/watch-6071/configmaps/e2e-watch-test-watch-closed 31383b3a-6dba-4c16-88ef-8e048061ff0e 503146 0 2020-08-16 21:10:45 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 21:10:45.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-6071" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":278,"completed":159,"skipped":2562,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 21:10:45.705: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 16 21:10:48.237: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 16 21:10:50.910: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733209048, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733209048, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733209048, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733209048, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 16 21:10:54.009: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Aug 16 21:10:54.311: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-6368-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 21:10:56.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5724" for this suite. STEP: Destroying namespace "webhook-5724-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:11.048 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":278,"completed":160,"skipped":2585,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 21:10:56.755: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should support rollover [Conformance] 
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Aug 16 21:10:56.937: INFO: Pod name rollover-pod: Found 0 pods out of 1 Aug 16 21:11:02.192: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Aug 16 21:11:02.193: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Aug 16 21:11:04.214: INFO: Creating deployment "test-rollover-deployment" Aug 16 21:11:04.412: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Aug 16 21:11:07.038: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Aug 16 21:11:07.048: INFO: Ensure that both replica sets have 1 created replica Aug 16 21:11:07.058: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Aug 16 21:11:07.074: INFO: Updating deployment test-rollover-deployment Aug 16 21:11:07.075: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Aug 16 21:11:09.087: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Aug 16 21:11:09.098: INFO: Make sure deployment "test-rollover-deployment" is complete Aug 16 21:11:09.113: INFO: all replica sets need to contain the pod-template-hash label Aug 16 21:11:09.114: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733209065, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733209065, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733209067, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733209065, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 16 21:11:11.180: INFO: all replica sets need to contain the pod-template-hash label Aug 16 21:11:11.181: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733209065, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733209065, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733209067, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733209065, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 16 21:11:13.126: INFO: all replica sets need to contain the pod-template-hash label Aug 16 21:11:13.127: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733209065, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733209065, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733209072, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733209065, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 16 21:11:15.199: INFO: all replica sets need to contain the pod-template-hash label Aug 16 21:11:15.199: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733209065, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733209065, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733209072, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733209065, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 16 21:11:17.129: INFO: all replica sets need to contain the pod-template-hash label Aug 16 21:11:17.130: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733209065, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733209065, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733209072, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733209065, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 16 21:11:19.129: INFO: all replica sets need to contain the pod-template-hash label Aug 16 21:11:19.129: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733209065, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733209065, 
loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733209072, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733209065, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 16 21:11:21.487: INFO: all replica sets need to contain the pod-template-hash label Aug 16 21:11:21.487: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733209065, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733209065, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733209072, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733209065, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 16 21:11:24.524: INFO: Aug 16 21:11:24.524: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733209065, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733209065, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733209083, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733209065, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 16 21:11:25.194: INFO: Aug 16 21:11:25.194: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Aug 16 21:11:25.206: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-1750 /apis/apps/v1/namespaces/deployment-1750/deployments/test-rollover-deployment 7ea7774c-3bef-4aa0-bf1d-4501daf6ca3d 503423 2 2020-08-16 21:11:04 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil 
/dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x40038b49c8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-08-16 21:11:05 +0000 UTC,LastTransitionTime:2020-08-16 21:11:05 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-574d6dfbff" has successfully progressed.,LastUpdateTime:2020-08-16 21:11:24 +0000 UTC,LastTransitionTime:2020-08-16 21:11:05 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Aug 16 21:11:25.215: INFO: New ReplicaSet "test-rollover-deployment-574d6dfbff" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-574d6dfbff deployment-1750 /apis/apps/v1/namespaces/deployment-1750/replicasets/test-rollover-deployment-574d6dfbff f945db52-26b3-4610-b990-83205f05f8d5 503409 2 2020-08-16 21:11:07 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 7ea7774c-3bef-4aa0-bf1d-4501daf6ca3d 0x40038b4e27 0x40038b4e28}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 574d6dfbff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x40038b4e98 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Aug 16 21:11:25.215: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Aug 16 21:11:25.217: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-1750 /apis/apps/v1/namespaces/deployment-1750/replicasets/test-rollover-controller dc7f6a05-6f76-4b38-bb4c-eea23107b5a7 503419 2 2020-08-16 21:10:56 +0000 UTC map[name:rollover-pod pod:httpd] 
map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 7ea7774c-3bef-4aa0-bf1d-4501daf6ca3d 0x40038b4d57 0x40038b4d58}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0x40038b4db8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Aug 16 21:11:25.218: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c deployment-1750 /apis/apps/v1/namespaces/deployment-1750/replicasets/test-rollover-deployment-f6c94f66c 5a5d15ba-d16b-41f3-b33e-1af9cfd74083 503353 2 2020-08-16 21:11:04 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 7ea7774c-3bef-4aa0-bf1d-4501daf6ca3d 0x40038b4f00 0x40038b4f01}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x40038b4f78 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Aug 16 21:11:25.225: INFO: Pod "test-rollover-deployment-574d6dfbff-g4dhh" is available: &Pod{ObjectMeta:{test-rollover-deployment-574d6dfbff-g4dhh test-rollover-deployment-574d6dfbff- deployment-1750 /api/v1/namespaces/deployment-1750/pods/test-rollover-deployment-574d6dfbff-g4dhh e0310944-9799-4d1f-b582-81f20b67de80 503375 0 2020-08-16 21:11:07 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [{apps/v1 ReplicaSet test-rollover-deployment-574d6dfbff f945db52-26b3-4610-b990-83205f05f8d5 0x40037db807 0x40037db808}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4w8xr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4w8xr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4w8xr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:11:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:11:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:11:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:11:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:10.244.1.244,StartTime:2020-08-16 21:11:07 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-16 21:11:11 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://fc48b7123181fea95bc2d440e03b67ae22ffd4613e2962ec9bc86c66b66a9fea,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.244,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 16 21:11:25.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-1750" for this suite. • [SLOW TEST:28.481 seconds] [sig-apps] Deployment /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":278,"completed":161,"skipped":2610,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 16 21:11:25.239: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Aug 16 21:11:26.277: INFO: (0) /api/v1/nodes/jerma-worker/proxy/logs/:
alternatives.log
containers/
(the same two-entry listing was returned for each of the remaining node proxy log requests; the log is truncated here, dropping the end of the Proxy test and the start of the ResourceQuota test that continues below)
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:11:35.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-8248" for this suite.

• [SLOW TEST:8.082 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":278,"completed":163,"skipped":2648,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSS
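The ResourceQuota steps above (count existing quotas, create one, wait for its status to be calculated) can be reproduced by hand. The commands below are an illustrative sketch only; the namespace and quota names are made up and are not the ones generated by the test, and the same kubeconfig as above is assumed.

cat <<'EOF' >/dev/null
# sketch: create a quota and watch its status get populated
kubectl --kubeconfig=/root/.kube/config create namespace quota-demo
kubectl --kubeconfig=/root/.kube/config create quota test-quota --hard=pods=2,secrets=5 -n quota-demo
# status.hard and status.used appear once the quota controller has calculated usage
kubectl --kubeconfig=/root/.kube/config get resourcequota test-quota -n quota-demo -o yaml
EOF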
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:11:35.797: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:11:41.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-1839" for this suite.

• [SLOW TEST:6.165 seconds]
[k8s.io] Kubelet
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when scheduling a read only busybox container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":164,"skipped":2666,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSS
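As a rough illustration of what the read-only busybox case above exercises (not the framework's actual generated pod spec), a pod like the following would have any write to its root filesystem rejected; the pod name and command are assumptions.

cat <<'EOF' | kubectl --kubeconfig=/root/.kube/config apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: busybox-readonly-fs-demo
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: docker.io/library/busybox:1.29
    # this write fails because the root filesystem is mounted read-only
    command: ["/bin/sh", "-c", "echo test > /file; sleep 240"]
    securityContext:
      readOnlyRootFilesystem: true
EOF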
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with pruning [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:11:41.963: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 16 21:11:45.359: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 16 21:11:47.485: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733209105, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733209105, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733209105, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733209105, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 16 21:11:49.491: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733209105, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733209105, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733209105, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733209105, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 16 21:11:52.566: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with pruning [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 16 21:11:52.572: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-1465-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:11:54.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3838" for this suite.
STEP: Destroying namespace "webhook-3838-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:12.634 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with pruning [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":278,"completed":165,"skipped":2669,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:11:54.598: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Aug 16 21:11:54.700: INFO: Waiting up to 5m0s for pod "downward-api-59d2b26f-70b8-4739-b95f-4bc1058bfcb5" in namespace "downward-api-7780" to be "success or failure"
Aug 16 21:11:54.721: INFO: Pod "downward-api-59d2b26f-70b8-4739-b95f-4bc1058bfcb5": Phase="Pending", Reason="", readiness=false. Elapsed: 20.537714ms
Aug 16 21:11:56.728: INFO: Pod "downward-api-59d2b26f-70b8-4739-b95f-4bc1058bfcb5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027554038s
Aug 16 21:11:58.735: INFO: Pod "downward-api-59d2b26f-70b8-4739-b95f-4bc1058bfcb5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034380752s
STEP: Saw pod success
Aug 16 21:11:58.735: INFO: Pod "downward-api-59d2b26f-70b8-4739-b95f-4bc1058bfcb5" satisfied condition "success or failure"
Aug 16 21:11:58.740: INFO: Trying to get logs from node jerma-worker pod downward-api-59d2b26f-70b8-4739-b95f-4bc1058bfcb5 container dapi-container: 
STEP: delete the pod
Aug 16 21:11:58.880: INFO: Waiting for pod downward-api-59d2b26f-70b8-4739-b95f-4bc1058bfcb5 to disappear
Aug 16 21:11:58.896: INFO: Pod downward-api-59d2b26f-70b8-4739-b95f-4bc1058bfcb5 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:11:58.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7780" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":278,"completed":166,"skipped":2684,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:11:58.908: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should add annotations for pods in rc  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating Agnhost RC
Aug 16 21:11:58.971: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2716'
Aug 16 21:12:01.002: INFO: stderr: ""
Aug 16 21:12:01.002: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Aug 16 21:12:02.010: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 16 21:12:02.010: INFO: Found 0 / 1
Aug 16 21:12:03.010: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 16 21:12:03.011: INFO: Found 0 / 1
Aug 16 21:12:04.023: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 16 21:12:04.024: INFO: Found 0 / 1
Aug 16 21:12:05.011: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 16 21:12:05.012: INFO: Found 1 / 1
Aug 16 21:12:05.012: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Aug 16 21:12:05.019: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 16 21:12:05.020: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Aug 16 21:12:05.020: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod agnhost-master-stmsj --namespace=kubectl-2716 -p {"metadata":{"annotations":{"x":"y"}}}'
Aug 16 21:12:06.261: INFO: stderr: ""
Aug 16 21:12:06.261: INFO: stdout: "pod/agnhost-master-stmsj patched\n"
STEP: checking annotations
Aug 16 21:12:06.293: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 16 21:12:06.293: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:12:06.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2716" for this suite.

• [SLOW TEST:7.398 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl patch
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1433
    should add annotations for pods in rc  [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance]","total":278,"completed":167,"skipped":2706,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:12:06.310: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating replication controller my-hostname-basic-5dcb7ec9-2917-417f-b72b-317aa9cd03c0
Aug 16 21:12:06.460: INFO: Pod name my-hostname-basic-5dcb7ec9-2917-417f-b72b-317aa9cd03c0: Found 0 pods out of 1
Aug 16 21:12:11.778: INFO: Pod name my-hostname-basic-5dcb7ec9-2917-417f-b72b-317aa9cd03c0: Found 1 pods out of 1
Aug 16 21:12:11.779: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-5dcb7ec9-2917-417f-b72b-317aa9cd03c0" are running
Aug 16 21:12:11.785: INFO: Pod "my-hostname-basic-5dcb7ec9-2917-417f-b72b-317aa9cd03c0-lf876" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-16 21:12:06 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-16 21:12:10 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-16 21:12:10 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-16 21:12:06 +0000 UTC Reason: Message:}])
Aug 16 21:12:11.786: INFO: Trying to dial the pod
Aug 16 21:12:16.810: INFO: Controller my-hostname-basic-5dcb7ec9-2917-417f-b72b-317aa9cd03c0: Got expected result from replica 1 [my-hostname-basic-5dcb7ec9-2917-417f-b72b-317aa9cd03c0-lf876]: "my-hostname-basic-5dcb7ec9-2917-417f-b72b-317aa9cd03c0-lf876", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:12:16.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-7221" for this suite.

• [SLOW TEST:10.512 seconds]
[sig-apps] ReplicationController
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","total":278,"completed":168,"skipped":2746,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:12:16.824: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Aug 16 21:12:16.920: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 16 21:12:16.949: INFO: Waiting for terminating namespaces to be deleted...
Aug 16 21:12:16.953: INFO: 
Logging pods the kubelet thinks are on node jerma-worker before test
Aug 16 21:12:16.967: INFO: kube-proxy-lgd85 from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded)
Aug 16 21:12:16.967: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 16 21:12:16.967: INFO: agnhost-master-stmsj from kubectl-2716 started at 2020-08-16 21:12:01 +0000 UTC (1 container statuses recorded)
Aug 16 21:12:16.967: INFO: 	Container agnhost-master ready: false, restart count 0
Aug 16 21:12:16.967: INFO: kindnet-tfrcx from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded)
Aug 16 21:12:16.967: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 16 21:12:16.967: INFO: my-hostname-basic-5dcb7ec9-2917-417f-b72b-317aa9cd03c0-lf876 from replication-controller-7221 started at 2020-08-16 21:12:06 +0000 UTC (1 container statuses recorded)
Aug 16 21:12:16.967: INFO: 	Container my-hostname-basic-5dcb7ec9-2917-417f-b72b-317aa9cd03c0 ready: true, restart count 0
Aug 16 21:12:16.967: INFO: 
Logging pods the kubelet thinks are on node jerma-worker2 before test
Aug 16 21:12:16.980: INFO: kube-proxy-ckhpn from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded)
Aug 16 21:12:16.981: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 16 21:12:16.981: INFO: kindnet-gxck9 from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded)
Aug 16 21:12:16.981: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 16 21:12:16.981: INFO: busybox-readonly-fs5f89efcf-9658-4ada-9313-8df11de20a2a from kubelet-test-1839 started at 2020-08-16 21:11:35 +0000 UTC (1 container statuses recorded)
Aug 16 21:12:16.981: INFO: 	Container busybox-readonly-fs5f89efcf-9658-4ada-9313-8df11de20a2a ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: verifying the node has the label node jerma-worker
STEP: verifying the node has the label node jerma-worker2
Aug 16 21:12:17.094: INFO: Pod kindnet-gxck9 requesting resource cpu=100m on Node jerma-worker2
Aug 16 21:12:17.094: INFO: Pod kindnet-tfrcx requesting resource cpu=100m on Node jerma-worker
Aug 16 21:12:17.094: INFO: Pod kube-proxy-ckhpn requesting resource cpu=0m on Node jerma-worker2
Aug 16 21:12:17.094: INFO: Pod kube-proxy-lgd85 requesting resource cpu=0m on Node jerma-worker
Aug 16 21:12:17.094: INFO: Pod agnhost-master-stmsj requesting resource cpu=0m on Node jerma-worker
Aug 16 21:12:17.094: INFO: Pod busybox-readonly-fs5f89efcf-9658-4ada-9313-8df11de20a2a requesting resource cpu=0m on Node jerma-worker2
Aug 16 21:12:17.094: INFO: Pod my-hostname-basic-5dcb7ec9-2917-417f-b72b-317aa9cd03c0-lf876 requesting resource cpu=0m on Node jerma-worker
STEP: Starting Pods to consume most of the cluster CPU.
Aug 16 21:12:17.094: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker
Aug 16 21:12:17.102: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker2
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-203072a9-8fb5-4b08-b0a4-f60ee201693a.162bdbf4c1feedc9], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9131/filler-pod-203072a9-8fb5-4b08-b0a4-f60ee201693a to jerma-worker]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-203072a9-8fb5-4b08-b0a4-f60ee201693a.162bdbf53688a661], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-203072a9-8fb5-4b08-b0a4-f60ee201693a.162bdbf5c466acb1], Reason = [Created], Message = [Created container filler-pod-203072a9-8fb5-4b08-b0a4-f60ee201693a]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-203072a9-8fb5-4b08-b0a4-f60ee201693a.162bdbf5de32c7b1], Reason = [Started], Message = [Started container filler-pod-203072a9-8fb5-4b08-b0a4-f60ee201693a]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-6108859d-8e88-4492-a807-ac5a8383e530.162bdbf4c42372f4], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9131/filler-pod-6108859d-8e88-4492-a807-ac5a8383e530 to jerma-worker2]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-6108859d-8e88-4492-a807-ac5a8383e530.162bdbf57133cee2], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-6108859d-8e88-4492-a807-ac5a8383e530.162bdbf5cbd0d10c], Reason = [Created], Message = [Created container filler-pod-6108859d-8e88-4492-a807-ac5a8383e530]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-6108859d-8e88-4492-a807-ac5a8383e530.162bdbf5e20e9185], Reason = [Started], Message = [Started container filler-pod-6108859d-8e88-4492-a807-ac5a8383e530]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.162bdbf62df40763], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: removing the label node off the node jerma-worker
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node jerma-worker2
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:12:24.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-9131" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77

• [SLOW TEST:7.786 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates resource limits of pods that are allowed to run  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","total":278,"completed":169,"skipped":2756,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:12:24.613: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8843 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-8843;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8843 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-8843;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8843.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-8843.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8843.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-8843.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8843.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-8843.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8843.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-8843.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8843.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-8843.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8843.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-8843.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8843.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 185.86.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.86.185_udp@PTR;check="$$(dig +tcp +noall +answer +search 185.86.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.86.185_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8843 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-8843;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8843 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-8843;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8843.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-8843.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8843.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-8843.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8843.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-8843.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8843.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-8843.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8843.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-8843.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8843.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-8843.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8843.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 185.86.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.86.185_udp@PTR;check="$$(dig +tcp +noall +answer +search 185.86.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.86.185_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 16 21:12:33.185: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8843/dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925: the server could not find the requested resource (get pods dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925)
Aug 16 21:12:33.189: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8843/dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925: the server could not find the requested resource (get pods dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925)
Aug 16 21:12:33.194: INFO: Unable to read wheezy_udp@dns-test-service.dns-8843 from pod dns-8843/dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925: the server could not find the requested resource (get pods dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925)
Aug 16 21:12:33.197: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8843 from pod dns-8843/dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925: the server could not find the requested resource (get pods dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925)
Aug 16 21:12:33.201: INFO: Unable to read wheezy_udp@dns-test-service.dns-8843.svc from pod dns-8843/dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925: the server could not find the requested resource (get pods dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925)
Aug 16 21:12:33.205: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8843.svc from pod dns-8843/dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925: the server could not find the requested resource (get pods dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925)
Aug 16 21:12:33.209: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8843.svc from pod dns-8843/dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925: the server could not find the requested resource (get pods dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925)
Aug 16 21:12:33.214: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8843.svc from pod dns-8843/dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925: the server could not find the requested resource (get pods dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925)
Aug 16 21:12:33.246: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8843/dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925: the server could not find the requested resource (get pods dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925)
Aug 16 21:12:33.251: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8843/dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925: the server could not find the requested resource (get pods dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925)
Aug 16 21:12:33.255: INFO: Unable to read jessie_udp@dns-test-service.dns-8843 from pod dns-8843/dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925: the server could not find the requested resource (get pods dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925)
Aug 16 21:12:33.259: INFO: Unable to read jessie_tcp@dns-test-service.dns-8843 from pod dns-8843/dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925: the server could not find the requested resource (get pods dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925)
Aug 16 21:12:33.263: INFO: Unable to read jessie_udp@dns-test-service.dns-8843.svc from pod dns-8843/dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925: the server could not find the requested resource (get pods dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925)
Aug 16 21:12:33.267: INFO: Unable to read jessie_tcp@dns-test-service.dns-8843.svc from pod dns-8843/dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925: the server could not find the requested resource (get pods dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925)
Aug 16 21:12:33.271: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8843.svc from pod dns-8843/dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925: the server could not find the requested resource (get pods dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925)
Aug 16 21:12:33.275: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8843.svc from pod dns-8843/dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925: the server could not find the requested resource (get pods dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925)
Aug 16 21:12:33.296: INFO: Lookups using dns-8843/dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8843 wheezy_tcp@dns-test-service.dns-8843 wheezy_udp@dns-test-service.dns-8843.svc wheezy_tcp@dns-test-service.dns-8843.svc wheezy_udp@_http._tcp.dns-test-service.dns-8843.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8843.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8843 jessie_tcp@dns-test-service.dns-8843 jessie_udp@dns-test-service.dns-8843.svc jessie_tcp@dns-test-service.dns-8843.svc jessie_udp@_http._tcp.dns-test-service.dns-8843.svc jessie_tcp@_http._tcp.dns-test-service.dns-8843.svc]

Aug 16 21:12:38.303: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8843/dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925: the server could not find the requested resource (get pods dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925)
Aug 16 21:12:38.308: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8843/dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925: the server could not find the requested resource (get pods dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925)
Aug 16 21:12:38.313: INFO: Unable to read wheezy_udp@dns-test-service.dns-8843 from pod dns-8843/dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925: the server could not find the requested resource (get pods dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925)
Aug 16 21:12:38.318: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8843 from pod dns-8843/dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925: the server could not find the requested resource (get pods dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925)
Aug 16 21:12:38.322: INFO: Unable to read wheezy_udp@dns-test-service.dns-8843.svc from pod dns-8843/dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925: the server could not find the requested resource (get pods dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925)
Aug 16 21:12:38.327: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8843.svc from pod dns-8843/dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925: the server could not find the requested resource (get pods dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925)
Aug 16 21:12:38.332: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8843.svc from pod dns-8843/dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925: the server could not find the requested resource (get pods dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925)
Aug 16 21:12:38.335: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8843.svc from pod dns-8843/dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925: the server could not find the requested resource (get pods dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925)
Aug 16 21:12:38.362: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8843/dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925: the server could not find the requested resource (get pods dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925)
Aug 16 21:12:38.366: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8843/dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925: the server could not find the requested resource (get pods dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925)
Aug 16 21:12:38.371: INFO: Unable to read jessie_udp@dns-test-service.dns-8843 from pod dns-8843/dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925: the server could not find the requested resource (get pods dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925)
Aug 16 21:12:38.375: INFO: Unable to read jessie_tcp@dns-test-service.dns-8843 from pod dns-8843/dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925: the server could not find the requested resource (get pods dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925)
Aug 16 21:12:38.379: INFO: Unable to read jessie_udp@dns-test-service.dns-8843.svc from pod dns-8843/dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925: the server could not find the requested resource (get pods dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925)
Aug 16 21:12:38.383: INFO: Unable to read jessie_tcp@dns-test-service.dns-8843.svc from pod dns-8843/dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925: the server could not find the requested resource (get pods dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925)
Aug 16 21:12:38.388: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8843.svc from pod dns-8843/dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925: the server could not find the requested resource (get pods dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925)
Aug 16 21:12:38.392: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8843.svc from pod dns-8843/dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925: the server could not find the requested resource (get pods dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925)
Aug 16 21:12:38.419: INFO: Lookups using dns-8843/dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8843 wheezy_tcp@dns-test-service.dns-8843 wheezy_udp@dns-test-service.dns-8843.svc wheezy_tcp@dns-test-service.dns-8843.svc wheezy_udp@_http._tcp.dns-test-service.dns-8843.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8843.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8843 jessie_tcp@dns-test-service.dns-8843 jessie_udp@dns-test-service.dns-8843.svc jessie_tcp@dns-test-service.dns-8843.svc jessie_udp@_http._tcp.dns-test-service.dns-8843.svc jessie_tcp@_http._tcp.dns-test-service.dns-8843.svc]

Aug 16 21:12:43.303: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8843/dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925: the server could not find the requested resource (get pods dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925)
Aug 16 21:12:43.307: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8843/dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925: the server could not find the requested resource (get pods dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925)
Aug 16 21:12:43.313: INFO: Unable to read wheezy_udp@dns-test-service.dns-8843 from pod dns-8843/dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925: the server could not find the requested resource (get pods dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925)
Aug 16 21:12:43.318: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8843 from pod dns-8843/dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925: the server could not find the requested resource (get pods dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925)
Aug 16 21:12:43.322: INFO: Unable to read wheezy_udp@dns-test-service.dns-8843.svc from pod dns-8843/dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925: the server could not find the requested resource (get pods dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925)
Aug 16 21:12:43.325: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8843.svc from pod dns-8843/dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925: the server could not find the requested resource (get pods dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925)
Aug 16 21:12:43.328: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8843.svc from pod dns-8843/dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925: the server could not find the requested resource (get pods dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925)
Aug 16 21:12:43.332: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8843.svc from pod dns-8843/dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925: the server could not find the requested resource (get pods dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925)
Aug 16 21:12:43.359: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8843/dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925: the server could not find the requested resource (get pods dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925)
Aug 16 21:12:43.363: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8843/dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925: the server could not find the requested resource (get pods dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925)
Aug 16 21:12:43.367: INFO: Unable to read jessie_udp@dns-test-service.dns-8843 from pod dns-8843/dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925: the server could not find the requested resource (get pods dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925)
Aug 16 21:12:43.371: INFO: Unable to read jessie_tcp@dns-test-service.dns-8843 from pod dns-8843/dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925: the server could not find the requested resource (get pods dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925)
Aug 16 21:12:43.374: INFO: Unable to read jessie_udp@dns-test-service.dns-8843.svc from pod dns-8843/dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925: the server could not find the requested resource (get pods dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925)
Aug 16 21:12:43.378: INFO: Unable to read jessie_tcp@dns-test-service.dns-8843.svc from pod dns-8843/dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925: the server could not find the requested resource (get pods dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925)
Aug 16 21:12:43.382: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8843.svc from pod dns-8843/dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925: the server could not find the requested resource (get pods dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925)
Aug 16 21:12:43.386: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8843.svc from pod dns-8843/dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925: the server could not find the requested resource (get pods dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925)
Aug 16 21:12:43.413: INFO: Lookups using dns-8843/dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8843 wheezy_tcp@dns-test-service.dns-8843 wheezy_udp@dns-test-service.dns-8843.svc wheezy_tcp@dns-test-service.dns-8843.svc wheezy_udp@_http._tcp.dns-test-service.dns-8843.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8843.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8843 jessie_tcp@dns-test-service.dns-8843 jessie_udp@dns-test-service.dns-8843.svc jessie_tcp@dns-test-service.dns-8843.svc jessie_udp@_http._tcp.dns-test-service.dns-8843.svc jessie_tcp@_http._tcp.dns-test-service.dns-8843.svc]

Aug 16 21:12:48.303: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8843/dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925: the server could not find the requested resource (get pods dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925)
Aug 16 21:12:48.309: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8843/dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925: the server could not find the requested resource (get pods dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925)
Aug 16 21:12:48.314: INFO: Unable to read wheezy_udp@dns-test-service.dns-8843 from pod dns-8843/dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925: the server could not find the requested resource (get pods dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925)
Aug 16 21:12:48.318: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8843 from pod dns-8843/dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925: the server could not find the requested resource (get pods dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925)
Aug 16 21:12:48.322: INFO: Unable to read wheezy_udp@dns-test-service.dns-8843.svc from pod dns-8843/dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925: the server could not find the requested resource (get pods dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925)
Aug 16 21:12:48.325: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8843.svc from pod dns-8843/dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925: the server could not find the requested resource (get pods dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925)
Aug 16 21:12:48.329: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8843.svc from pod dns-8843/dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925: the server could not find the requested resource (get pods dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925)
Aug 16 21:12:48.333: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8843.svc from pod dns-8843/dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925: the server could not find the requested resource (get pods dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925)
Aug 16 21:12:48.363: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8843/dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925: the server could not find the requested resource (get pods dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925)
Aug 16 21:12:48.367: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8843/dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925: the server could not find the requested resource (get pods dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925)
Aug 16 21:12:48.371: INFO: Unable to read jessie_udp@dns-test-service.dns-8843 from pod dns-8843/dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925: the server could not find the requested resource (get pods dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925)
Aug 16 21:12:48.374: INFO: Unable to read jessie_tcp@dns-test-service.dns-8843 from pod dns-8843/dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925: the server could not find the requested resource (get pods dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925)
Aug 16 21:12:48.378: INFO: Unable to read jessie_udp@dns-test-service.dns-8843.svc from pod dns-8843/dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925: the server could not find the requested resource (get pods dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925)
Aug 16 21:12:48.382: INFO: Unable to read jessie_tcp@dns-test-service.dns-8843.svc from pod dns-8843/dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925: the server could not find the requested resource (get pods dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925)
Aug 16 21:12:48.386: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8843.svc from pod dns-8843/dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925: the server could not find the requested resource (get pods dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925)
Aug 16 21:12:48.389: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8843.svc from pod dns-8843/dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925: the server could not find the requested resource (get pods dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925)
Aug 16 21:12:48.413: INFO: Lookups using dns-8843/dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8843 wheezy_tcp@dns-test-service.dns-8843 wheezy_udp@dns-test-service.dns-8843.svc wheezy_tcp@dns-test-service.dns-8843.svc wheezy_udp@_http._tcp.dns-test-service.dns-8843.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8843.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8843 jessie_tcp@dns-test-service.dns-8843 jessie_udp@dns-test-service.dns-8843.svc jessie_tcp@dns-test-service.dns-8843.svc jessie_udp@_http._tcp.dns-test-service.dns-8843.svc jessie_tcp@_http._tcp.dns-test-service.dns-8843.svc]

Aug 16 21:12:53.302: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8843/dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925: the server could not find the requested resource (get pods dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925)
Aug 16 21:12:53.307: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8843/dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925: the server could not find the requested resource (get pods dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925)
Aug 16 21:12:53.311: INFO: Unable to read wheezy_udp@dns-test-service.dns-8843 from pod dns-8843/dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925: the server could not find the requested resource (get pods dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925)
Aug 16 21:12:53.315: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8843 from pod dns-8843/dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925: the server could not find the requested resource (get pods dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925)
Aug 16 21:12:53.318: INFO: Unable to read wheezy_udp@dns-test-service.dns-8843.svc from pod dns-8843/dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925: the server could not find the requested resource (get pods dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925)
Aug 16 21:12:53.322: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8843.svc from pod dns-8843/dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925: the server could not find the requested resource (get pods dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925)
Aug 16 21:12:53.325: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8843.svc from pod dns-8843/dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925: the server could not find the requested resource (get pods dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925)
Aug 16 21:12:53.329: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8843.svc from pod dns-8843/dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925: the server could not find the requested resource (get pods dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925)
Aug 16 21:12:53.380: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8843/dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925: the server could not find the requested resource (get pods dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925)
Aug 16 21:12:53.503: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8843/dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925: the server could not find the requested resource (get pods dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925)
Aug 16 21:12:53.507: INFO: Unable to read jessie_udp@dns-test-service.dns-8843 from pod dns-8843/dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925: the server could not find the requested resource (get pods dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925)
Aug 16 21:12:53.511: INFO: Unable to read jessie_tcp@dns-test-service.dns-8843 from pod dns-8843/dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925: the server could not find the requested resource (get pods dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925)
Aug 16 21:12:53.515: INFO: Unable to read jessie_udp@dns-test-service.dns-8843.svc from pod dns-8843/dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925: the server could not find the requested resource (get pods dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925)
Aug 16 21:12:53.519: INFO: Unable to read jessie_tcp@dns-test-service.dns-8843.svc from pod dns-8843/dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925: the server could not find the requested resource (get pods dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925)
Aug 16 21:12:53.522: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8843.svc from pod dns-8843/dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925: the server could not find the requested resource (get pods dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925)
Aug 16 21:12:53.526: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8843.svc from pod dns-8843/dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925: the server could not find the requested resource (get pods dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925)
Aug 16 21:12:53.553: INFO: Lookups using dns-8843/dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8843 wheezy_tcp@dns-test-service.dns-8843 wheezy_udp@dns-test-service.dns-8843.svc wheezy_tcp@dns-test-service.dns-8843.svc wheezy_udp@_http._tcp.dns-test-service.dns-8843.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8843.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8843 jessie_tcp@dns-test-service.dns-8843 jessie_udp@dns-test-service.dns-8843.svc jessie_tcp@dns-test-service.dns-8843.svc jessie_udp@_http._tcp.dns-test-service.dns-8843.svc jessie_tcp@_http._tcp.dns-test-service.dns-8843.svc]

Aug 16 21:12:58.303: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8843/dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925: the server could not find the requested resource (get pods dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925)
Aug 16 21:12:58.308: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8843/dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925: the server could not find the requested resource (get pods dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925)
Aug 16 21:12:58.313: INFO: Unable to read wheezy_udp@dns-test-service.dns-8843 from pod dns-8843/dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925: the server could not find the requested resource (get pods dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925)
Aug 16 21:12:58.318: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8843 from pod dns-8843/dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925: the server could not find the requested resource (get pods dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925)
Aug 16 21:12:58.321: INFO: Unable to read wheezy_udp@dns-test-service.dns-8843.svc from pod dns-8843/dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925: the server could not find the requested resource (get pods dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925)
Aug 16 21:12:58.325: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8843.svc from pod dns-8843/dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925: the server could not find the requested resource (get pods dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925)
Aug 16 21:12:58.329: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8843.svc from pod dns-8843/dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925: the server could not find the requested resource (get pods dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925)
Aug 16 21:12:58.332: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8843.svc from pod dns-8843/dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925: the server could not find the requested resource (get pods dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925)
Aug 16 21:12:58.361: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8843/dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925: the server could not find the requested resource (get pods dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925)
Aug 16 21:12:58.366: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8843/dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925: the server could not find the requested resource (get pods dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925)
Aug 16 21:12:58.370: INFO: Unable to read jessie_udp@dns-test-service.dns-8843 from pod dns-8843/dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925: the server could not find the requested resource (get pods dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925)
Aug 16 21:12:58.373: INFO: Unable to read jessie_tcp@dns-test-service.dns-8843 from pod dns-8843/dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925: the server could not find the requested resource (get pods dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925)
Aug 16 21:12:58.376: INFO: Unable to read jessie_udp@dns-test-service.dns-8843.svc from pod dns-8843/dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925: the server could not find the requested resource (get pods dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925)
Aug 16 21:12:58.380: INFO: Unable to read jessie_tcp@dns-test-service.dns-8843.svc from pod dns-8843/dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925: the server could not find the requested resource (get pods dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925)
Aug 16 21:12:58.383: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8843.svc from pod dns-8843/dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925: the server could not find the requested resource (get pods dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925)
Aug 16 21:12:58.386: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8843.svc from pod dns-8843/dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925: the server could not find the requested resource (get pods dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925)
Aug 16 21:12:58.408: INFO: Lookups using dns-8843/dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8843 wheezy_tcp@dns-test-service.dns-8843 wheezy_udp@dns-test-service.dns-8843.svc wheezy_tcp@dns-test-service.dns-8843.svc wheezy_udp@_http._tcp.dns-test-service.dns-8843.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8843.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8843 jessie_tcp@dns-test-service.dns-8843 jessie_udp@dns-test-service.dns-8843.svc jessie_tcp@dns-test-service.dns-8843.svc jessie_udp@_http._tcp.dns-test-service.dns-8843.svc jessie_tcp@_http._tcp.dns-test-service.dns-8843.svc]

Aug 16 21:13:03.869: INFO: DNS probes using dns-8843/dns-test-3b82682f-7c79-40a9-b5a3-51f0abadc925 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:13:07.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8843" for this suite.

• [SLOW TEST:43.603 seconds]
[sig-network] DNS
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":278,"completed":170,"skipped":2786,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
S
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:13:08.217: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Aug 16 21:13:09.130: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:13:23.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-622" for this suite.

• [SLOW TEST:15.579 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":278,"completed":171,"skipped":2787,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD with validation schema [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:13:23.799: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD with validation schema [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 16 21:13:24.830: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with known and required properties
Aug 16 21:13:44.189: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1722 create -f -'
Aug 16 21:13:49.529: INFO: stderr: ""
Aug 16 21:13:49.529: INFO: stdout: "e2e-test-crd-publish-openapi-8301-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Aug 16 21:13:49.529: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1722 delete e2e-test-crd-publish-openapi-8301-crds test-foo'
Aug 16 21:13:51.269: INFO: stderr: ""
Aug 16 21:13:51.270: INFO: stdout: "e2e-test-crd-publish-openapi-8301-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
Aug 16 21:13:51.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1722 apply -f -'
Aug 16 21:13:53.021: INFO: stderr: ""
Aug 16 21:13:53.021: INFO: stdout: "e2e-test-crd-publish-openapi-8301-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Aug 16 21:13:53.022: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1722 delete e2e-test-crd-publish-openapi-8301-crds test-foo'
Aug 16 21:13:54.364: INFO: stderr: ""
Aug 16 21:13:54.364: INFO: stdout: "e2e-test-crd-publish-openapi-8301-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema
Aug 16 21:13:54.365: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1722 create -f -'
Aug 16 21:13:56.111: INFO: rc: 1
Aug 16 21:13:56.112: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1722 apply -f -'
Aug 16 21:13:57.676: INFO: rc: 1
STEP: client-side validation (kubectl create and apply) rejects request without required properties
Aug 16 21:13:57.677: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1722 create -f -'
Aug 16 21:13:59.206: INFO: rc: 1
Aug 16 21:13:59.207: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1722 apply -f -'
Aug 16 21:14:00.897: INFO: rc: 1
STEP: kubectl explain works to explain CR properties
Aug 16 21:14:00.898: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8301-crds'
Aug 16 21:14:02.527: INFO: stderr: ""
Aug 16 21:14:02.528: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-8301-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n     Foo CRD for Testing\n\nFIELDS:\n   apiVersion\t\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t\n     Specification of Foo\n\n   status\t\n     Status of Foo\n\n"
STEP: kubectl explain works to explain CR properties recursively
Aug 16 21:14:02.532: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8301-crds.metadata'
Aug 16 21:14:04.507: INFO: stderr: ""
Aug 16 21:14:04.508: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-8301-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n     ObjectMeta is metadata that all persisted resources must have, which\n     includes all objects users must create.\n\nFIELDS:\n   annotations\t\n     Annotations is an unstructured key value map stored with a resource that\n     may be set by external tools to store and retrieve arbitrary metadata. They\n     are not queryable and should be preserved when modifying objects. More\n     info: http://kubernetes.io/docs/user-guide/annotations\n\n   clusterName\t\n     The name of the cluster which the object belongs to. This is used to\n     distinguish resources with same name and namespace in different clusters.\n     This field is not set anywhere right now and apiserver is going to ignore\n     it if set in create or update request.\n\n   creationTimestamp\t\n     CreationTimestamp is a timestamp representing the server time when this\n     object was created. It is not guaranteed to be set in happens-before order\n     across separate operations. Clients may not set this value. It is\n     represented in RFC3339 form and is in UTC. Populated by the system.\n     Read-only. Null for lists. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   deletionGracePeriodSeconds\t\n     Number of seconds allowed for this object to gracefully terminate before it\n     will be removed from the system. Only set when deletionTimestamp is also\n     set. May only be shortened. Read-only.\n\n   deletionTimestamp\t\n     DeletionTimestamp is RFC 3339 date and time at which this resource will be\n     deleted. This field is set by the server when a graceful deletion is\n     requested by the user, and is not directly settable by a client. The\n     resource is expected to be deleted (no longer visible from resource lists,\n     and not reachable by name) after the time in this field, once the\n     finalizers list is empty. As long as the finalizers list contains items,\n     deletion is blocked. Once the deletionTimestamp is set, this value may not\n     be unset or be set further into the future, although it may be shortened or\n     the resource may be deleted prior to this time. For example, a user may\n     request that a pod is deleted in 30 seconds. The Kubelet will react by\n     sending a graceful termination signal to the containers in the pod. After\n     that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n     to the container and after cleanup, remove the pod from the API. In the\n     presence of network partitions, this object may still exist after this\n     timestamp, until an administrator or automated process can determine the\n     resource is fully terminated. If not set, graceful deletion of the object\n     has not been requested. Populated by the system when a graceful deletion is\n     requested. Read-only. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   finalizers\t<[]string>\n     Must be empty before the object is deleted from the registry. Each entry is\n     an identifier for the responsible component that will remove the entry from\n     the list. 
If the deletionTimestamp of the object is non-nil, entries in\n     this list can only be removed. Finalizers may be processed and removed in\n     any order. Order is NOT enforced because it introduces significant risk of\n     stuck finalizers. finalizers is a shared field, any actor with permission\n     can reorder it. If the finalizer list is processed in order, then this can\n     lead to a situation in which the component responsible for the first\n     finalizer in the list is waiting for a signal (field value, external\n     system, or other) produced by a component responsible for a finalizer later\n     in the list, resulting in a deadlock. Without enforced ordering finalizers\n     are free to order amongst themselves and are not vulnerable to ordering\n     changes in the list.\n\n   generateName\t\n     GenerateName is an optional prefix, used by the server, to generate a\n     unique name ONLY IF the Name field has not been provided. If this field is\n     used, the name returned to the client will be different than the name\n     passed. This value will also be combined with a unique suffix. The provided\n     value has the same validation rules as the Name field, and may be truncated\n     by the length of the suffix required to make the value unique on the\n     server. If this field is specified and the generated name exists, the\n     server will NOT return a 409 - instead, it will either return 201 Created\n     or 500 with Reason ServerTimeout indicating a unique name could not be\n     found in the time allotted, and the client should retry (optionally after\n     the time indicated in the Retry-After header). Applied only if Name is not\n     specified. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n   generation\t\n     A sequence number representing a specific generation of the desired state.\n     Populated by the system. Read-only.\n\n   labels\t\n     Map of string keys and values that can be used to organize and categorize\n     (scope and select) objects. May match selectors of replication controllers\n     and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n   managedFields\t<[]Object>\n     ManagedFields maps workflow-id and version to the set of fields that are\n     managed by that workflow. This is mostly for internal housekeeping, and\n     users typically shouldn't need to set or understand this field. A workflow\n     can be the user's name, a controller's name, or the name of a specific\n     apply path like \"ci-cd\". The set of fields is always in the version that\n     the workflow used when modifying the object.\n\n   name\t\n     Name must be unique within a namespace. Is required when creating\n     resources, although some resources may allow a client to request the\n     generation of an appropriate name automatically. Name is primarily intended\n     for creation idempotence and configuration definition. Cannot be updated.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n   namespace\t\n     Namespace defines the space within each name must be unique. An empty\n     namespace is equivalent to the \"default\" namespace, but \"default\" is the\n     canonical representation. Not all objects are required to be scoped to a\n     namespace - the value of this field for those objects will be empty. Must\n     be a DNS_LABEL. Cannot be updated. 
More info:\n     http://kubernetes.io/docs/user-guide/namespaces\n\n   ownerReferences\t<[]Object>\n     List of objects depended by this object. If ALL objects in the list have\n     been deleted, this object will be garbage collected. If this object is\n     managed by a controller, then an entry in this list will point to this\n     controller, with the controller field set to true. There cannot be more\n     than one managing controller.\n\n   resourceVersion\t\n     An opaque value that represents the internal version of this object that\n     can be used by clients to determine when objects have changed. May be used\n     for optimistic concurrency, change detection, and the watch operation on a\n     resource or set of resources. Clients must treat these values as opaque and\n     passed unmodified back to the server. They may only be valid for a\n     particular resource or set of resources. Populated by the system.\n     Read-only. Value must be treated as opaque by clients and . More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n   selfLink\t\n     SelfLink is a URL representing this object. Populated by the system.\n     Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n     release and the field is planned to be removed in 1.21 release.\n\n   uid\t\n     UID is the unique in time and space value for this object. It is typically\n     generated by the server on successful creation of a resource and is not\n     allowed to change on PUT operations. Populated by the system. Read-only.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n"
Aug 16 21:14:04.519: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8301-crds.spec'
Aug 16 21:14:06.386: INFO: stderr: ""
Aug 16 21:14:06.386: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-8301-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
Aug 16 21:14:06.388: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8301-crds.spec.bars'
Aug 16 21:14:08.683: INFO: stderr: ""
Aug 16 21:14:08.684: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-8301-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t -required-\n     Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
Aug 16 21:14:08.686: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8301-crds.spec.bars2'
Aug 16 21:14:10.587: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:14:29.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-1722" for this suite.

• [SLOW TEST:65.538 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD with validation schema [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":278,"completed":172,"skipped":2829,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:14:29.338: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:14:38.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-1641" for this suite.

• [SLOW TEST:9.000 seconds]
[k8s.io] Kubelet
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when scheduling a busybox command that always fails in a pod
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":278,"completed":173,"skipped":2842,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:14:38.343: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should retry creating failed daemon pods [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Aug 16 21:14:38.635: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 16 21:14:38.709: INFO: Number of nodes with available pods: 0
Aug 16 21:14:38.709: INFO: Node jerma-worker is running more than one daemon pod
Aug 16 21:14:39.718: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 16 21:14:39.723: INFO: Number of nodes with available pods: 0
Aug 16 21:14:39.723: INFO: Node jerma-worker is running more than one daemon pod
Aug 16 21:14:40.901: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 16 21:14:40.974: INFO: Number of nodes with available pods: 0
Aug 16 21:14:40.974: INFO: Node jerma-worker is running more than one daemon pod
Aug 16 21:14:41.829: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 16 21:14:41.832: INFO: Number of nodes with available pods: 0
Aug 16 21:14:41.832: INFO: Node jerma-worker is running more than one daemon pod
Aug 16 21:14:42.716: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 16 21:14:42.719: INFO: Number of nodes with available pods: 0
Aug 16 21:14:42.719: INFO: Node jerma-worker is running more than one daemon pod
Aug 16 21:14:43.717: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 16 21:14:43.721: INFO: Number of nodes with available pods: 2
Aug 16 21:14:43.721: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Aug 16 21:14:45.309: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 16 21:14:45.404: INFO: Number of nodes with available pods: 1
Aug 16 21:14:45.404: INFO: Node jerma-worker is running more than one daemon pod
Aug 16 21:14:46.463: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 16 21:14:46.468: INFO: Number of nodes with available pods: 1
Aug 16 21:14:46.468: INFO: Node jerma-worker is running more than one daemon pod
Aug 16 21:14:47.413: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 16 21:14:47.640: INFO: Number of nodes with available pods: 1
Aug 16 21:14:47.640: INFO: Node jerma-worker is running more than one daemon pod
Aug 16 21:14:48.413: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 16 21:14:48.417: INFO: Number of nodes with available pods: 1
Aug 16 21:14:48.417: INFO: Node jerma-worker is running more than one daemon pod
Aug 16 21:14:49.409: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 16 21:14:49.413: INFO: Number of nodes with available pods: 2
Aug 16 21:14:49.413: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8156, will wait for the garbage collector to delete the pods
Aug 16 21:14:49.476: INFO: Deleting DaemonSet.extensions daemon-set took: 6.14993ms
Aug 16 21:14:49.877: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.748214ms
Aug 16 21:15:01.781: INFO: Number of nodes with available pods: 0
Aug 16 21:15:01.781: INFO: Number of running nodes: 0, number of available pods: 0
Aug 16 21:15:01.785: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8156/daemonsets","resourceVersion":"504505"},"items":null}

Aug 16 21:15:01.788: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8156/pods","resourceVersion":"504505"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:15:01.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-8156" for this suite.

• [SLOW TEST:23.494 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":278,"completed":174,"skipped":2913,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl label 
  should update the label on a resource  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:15:01.839: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl label
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1276
STEP: creating the pod
Aug 16 21:15:01.893: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7966'
Aug 16 21:15:03.571: INFO: stderr: ""
Aug 16 21:15:03.571: INFO: stdout: "pod/pause created\n"
Aug 16 21:15:03.571: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Aug 16 21:15:03.571: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-7966" to be "running and ready"
Aug 16 21:15:03.588: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 16.772462ms
Aug 16 21:15:05.653: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.081329521s
Aug 16 21:15:07.658: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.086990203s
Aug 16 21:15:07.658: INFO: Pod "pause" satisfied condition "running and ready"
Aug 16 21:15:07.659: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: adding the label testing-label with value testing-label-value to a pod
Aug 16 21:15:07.660: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-7966'
Aug 16 21:15:08.863: INFO: stderr: ""
Aug 16 21:15:08.863: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Aug 16 21:15:08.864: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-7966'
Aug 16 21:15:10.046: INFO: stderr: ""
Aug 16 21:15:10.046: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          7s    testing-label-value\n"
STEP: removing the label testing-label of a pod
Aug 16 21:15:10.047: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-7966'
Aug 16 21:15:11.247: INFO: stderr: ""
Aug 16 21:15:11.248: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Aug 16 21:15:11.248: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-7966'
Aug 16 21:15:12.449: INFO: stderr: ""
Aug 16 21:15:12.449: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          9s    \n"
[AfterEach] Kubectl label
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1283
STEP: using delete to clean up resources
Aug 16 21:15:12.450: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7966'
Aug 16 21:15:13.651: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 16 21:15:13.651: INFO: stdout: "pod \"pause\" force deleted\n"
Aug 16 21:15:13.651: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-7966'
Aug 16 21:15:15.057: INFO: stderr: "No resources found in kubectl-7966 namespace.\n"
Aug 16 21:15:15.057: INFO: stdout: ""
Aug 16 21:15:15.058: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-7966 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Aug 16 21:15:16.330: INFO: stderr: ""
Aug 16 21:15:16.330: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:15:16.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7966" for this suite.

• [SLOW TEST:14.502 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl label
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1273
    should update the label on a resource  [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource  [Conformance]","total":278,"completed":175,"skipped":2921,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:15:16.345: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Aug 16 21:15:16.479: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:15:26.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-5507" for this suite.

• [SLOW TEST:9.963 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should invoke init containers on a RestartAlways pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":278,"completed":176,"skipped":2960,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:15:26.313: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 16 21:15:26.565: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Aug 16 21:15:28.840: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:15:28.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-4596" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":278,"completed":177,"skipped":3013,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:15:28.875: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-8206
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Aug 16 21:15:28.968: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Aug 16 21:15:55.869: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.253:8080/dial?request=hostname&protocol=udp&host=10.244.2.240&port=8081&tries=1'] Namespace:pod-network-test-8206 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 16 21:15:55.869: INFO: >>> kubeConfig: /root/.kube/config
I0816 21:15:55.923853       7 log.go:172] (0x40029122c0) (0x4002e872c0) Create stream
I0816 21:15:55.923957       7 log.go:172] (0x40029122c0) (0x4002e872c0) Stream added, broadcasting: 1
I0816 21:15:55.926648       7 log.go:172] (0x40029122c0) Reply frame received for 1
I0816 21:15:55.926779       7 log.go:172] (0x40029122c0) (0x4001411400) Create stream
I0816 21:15:55.926863       7 log.go:172] (0x40029122c0) (0x4001411400) Stream added, broadcasting: 3
I0816 21:15:55.927959       7 log.go:172] (0x40029122c0) Reply frame received for 3
I0816 21:15:55.928076       7 log.go:172] (0x40029122c0) (0x4002e87360) Create stream
I0816 21:15:55.928133       7 log.go:172] (0x40029122c0) (0x4002e87360) Stream added, broadcasting: 5
I0816 21:15:55.929512       7 log.go:172] (0x40029122c0) Reply frame received for 5
I0816 21:15:55.999562       7 log.go:172] (0x40029122c0) Data frame received for 3
I0816 21:15:55.999656       7 log.go:172] (0x4001411400) (3) Data frame handling
I0816 21:15:55.999717       7 log.go:172] (0x4001411400) (3) Data frame sent
I0816 21:15:55.999780       7 log.go:172] (0x40029122c0) Data frame received for 3
I0816 21:15:55.999845       7 log.go:172] (0x4001411400) (3) Data frame handling
I0816 21:15:56.000081       7 log.go:172] (0x40029122c0) Data frame received for 5
I0816 21:15:56.000227       7 log.go:172] (0x4002e87360) (5) Data frame handling
I0816 21:15:56.001246       7 log.go:172] (0x40029122c0) Data frame received for 1
I0816 21:15:56.001369       7 log.go:172] (0x4002e872c0) (1) Data frame handling
I0816 21:15:56.001500       7 log.go:172] (0x4002e872c0) (1) Data frame sent
I0816 21:15:56.001606       7 log.go:172] (0x40029122c0) (0x4002e872c0) Stream removed, broadcasting: 1
I0816 21:15:56.001728       7 log.go:172] (0x40029122c0) Go away received
I0816 21:15:56.001937       7 log.go:172] (0x40029122c0) (0x4002e872c0) Stream removed, broadcasting: 1
I0816 21:15:56.001988       7 log.go:172] (0x40029122c0) (0x4001411400) Stream removed, broadcasting: 3
I0816 21:15:56.002032       7 log.go:172] (0x40029122c0) (0x4002e87360) Stream removed, broadcasting: 5
Aug 16 21:15:56.002: INFO: Waiting for responses: map[]
Aug 16 21:15:56.006: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.253:8080/dial?request=hostname&protocol=udp&host=10.244.1.252&port=8081&tries=1'] Namespace:pod-network-test-8206 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 16 21:15:56.006: INFO: >>> kubeConfig: /root/.kube/config
I0816 21:15:56.062332       7 log.go:172] (0x40032860b0) (0x4002d294a0) Create stream
I0816 21:15:56.062483       7 log.go:172] (0x40032860b0) (0x4002d294a0) Stream added, broadcasting: 1
I0816 21:15:56.065890       7 log.go:172] (0x40032860b0) Reply frame received for 1
I0816 21:15:56.066094       7 log.go:172] (0x40032860b0) (0x4002d29540) Create stream
I0816 21:15:56.066202       7 log.go:172] (0x40032860b0) (0x4002d29540) Stream added, broadcasting: 3
I0816 21:15:56.067706       7 log.go:172] (0x40032860b0) Reply frame received for 3
I0816 21:15:56.067819       7 log.go:172] (0x40032860b0) (0x4002d295e0) Create stream
I0816 21:15:56.067877       7 log.go:172] (0x40032860b0) (0x4002d295e0) Stream added, broadcasting: 5
I0816 21:15:56.068999       7 log.go:172] (0x40032860b0) Reply frame received for 5
I0816 21:15:56.131332       7 log.go:172] (0x40032860b0) Data frame received for 3
I0816 21:15:56.131477       7 log.go:172] (0x4002d29540) (3) Data frame handling
I0816 21:15:56.131562       7 log.go:172] (0x4002d29540) (3) Data frame sent
I0816 21:15:56.131635       7 log.go:172] (0x40032860b0) Data frame received for 3
I0816 21:15:56.131697       7 log.go:172] (0x4002d29540) (3) Data frame handling
I0816 21:15:56.131864       7 log.go:172] (0x40032860b0) Data frame received for 5
I0816 21:15:56.132180       7 log.go:172] (0x4002d295e0) (5) Data frame handling
I0816 21:15:56.133580       7 log.go:172] (0x40032860b0) Data frame received for 1
I0816 21:15:56.133702       7 log.go:172] (0x4002d294a0) (1) Data frame handling
I0816 21:15:56.133826       7 log.go:172] (0x4002d294a0) (1) Data frame sent
I0816 21:15:56.133945       7 log.go:172] (0x40032860b0) (0x4002d294a0) Stream removed, broadcasting: 1
I0816 21:15:56.134104       7 log.go:172] (0x40032860b0) Go away received
I0816 21:15:56.134443       7 log.go:172] (0x40032860b0) (0x4002d294a0) Stream removed, broadcasting: 1
I0816 21:15:56.134564       7 log.go:172] (0x40032860b0) (0x4002d29540) Stream removed, broadcasting: 3
I0816 21:15:56.134672       7 log.go:172] (0x40032860b0) (0x4002d295e0) Stream removed, broadcasting: 5
Aug 16 21:15:56.134: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:15:56.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-8206" for this suite.

• [SLOW TEST:27.355 seconds]
[sig-network] Networking
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":178,"skipped":3025,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  updates the published spec when one version gets renamed [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:15:56.232: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates the published spec when one version gets renamed [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: set up a multi version CRD
Aug 16 21:15:56.308: INFO: >>> kubeConfig: /root/.kube/config
STEP: rename a version
STEP: check the new version name is served
STEP: check the old version name is removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:17:49.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-7280" for this suite.

• [SLOW TEST:114.172 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  updates the published spec when one version gets renamed [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":278,"completed":179,"skipped":3049,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:17:50.408: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 16 21:17:50.934: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9968'
Aug 16 21:17:52.668: INFO: stderr: ""
Aug 16 21:17:52.668: INFO: stdout: "replicationcontroller/agnhost-master created\n"
Aug 16 21:17:52.669: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9968'
Aug 16 21:17:54.733: INFO: stderr: ""
Aug 16 21:17:54.733: INFO: stdout: "service/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Aug 16 21:17:55.742: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 16 21:17:55.742: INFO: Found 0 / 1
Aug 16 21:17:56.741: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 16 21:17:56.741: INFO: Found 1 / 1
Aug 16 21:17:56.741: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Aug 16 21:17:56.746: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 16 21:17:56.747: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Aug 16 21:17:56.747: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod agnhost-master-75vb2 --namespace=kubectl-9968'
Aug 16 21:17:58.140: INFO: stderr: ""
Aug 16 21:17:58.140: INFO: stdout: "Name:         agnhost-master-75vb2\nNamespace:    kubectl-9968\nPriority:     0\nNode:         jerma-worker2/172.18.0.3\nStart Time:   Sun, 16 Aug 2020 21:17:52 +0000\nLabels:       app=agnhost\n              role=master\nAnnotations:  \nStatus:       Running\nIP:           10.244.1.254\nIPs:\n  IP:           10.244.1.254\nControlled By:  ReplicationController/agnhost-master\nContainers:\n  agnhost-master:\n    Container ID:   containerd://ae5f12c9db7b756d096a35980b4eef65935c96f9b5421b02b0cf5862fa682985\n    Image:          gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n    Image ID:       gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Sun, 16 Aug 2020 21:17:55 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    \n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-gvm4p (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-gvm4p:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-gvm4p\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  \nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age   From                    Message\n  ----    ------     ----  ----                    -------\n  Normal  Scheduled  6s    default-scheduler       Successfully assigned kubectl-9968/agnhost-master-75vb2 to jerma-worker2\n  Normal  Pulled     5s    kubelet, jerma-worker2  Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\n  Normal  Created    3s    kubelet, jerma-worker2  Created container agnhost-master\n  Normal  Started    3s    kubelet, jerma-worker2  Started container agnhost-master\n"
Aug 16 21:17:58.143: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-9968'
Aug 16 21:17:59.532: INFO: stderr: ""
Aug 16 21:17:59.532: INFO: stdout: "Name:         agnhost-master\nNamespace:    kubectl-9968\nSelector:     app=agnhost,role=master\nLabels:       app=agnhost\n              role=master\nAnnotations:  \nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=agnhost\n           role=master\n  Containers:\n   agnhost-master:\n    Image:        gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  \n    Mounts:       \n  Volumes:        \nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  7s    replication-controller  Created pod: agnhost-master-75vb2\n"
Aug 16 21:17:59.533: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-9968'
Aug 16 21:18:00.841: INFO: stderr: ""
Aug 16 21:18:00.841: INFO: stdout: "Name:              agnhost-master\nNamespace:         kubectl-9968\nLabels:            app=agnhost\n                   role=master\nAnnotations:       \nSelector:          app=agnhost,role=master\nType:              ClusterIP\nIP:                10.99.193.18\nPort:                6379/TCP\nTargetPort:        agnhost-server/TCP\nEndpoints:         10.244.1.254:6379\nSession Affinity:  None\nEvents:            \n"
Aug 16 21:18:00.849: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node jerma-control-plane'
Aug 16 21:18:02.699: INFO: stderr: ""
Aug 16 21:18:02.699: INFO: stdout: "Name:               jerma-control-plane\nRoles:              master\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=jerma-control-plane\n                    kubernetes.io/os=linux\n                    node-role.kubernetes.io/master=\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Sat, 15 Aug 2020 09:37:06 +0000\nTaints:             node-role.kubernetes.io/master:NoSchedule\nUnschedulable:      false\nLease:\n  HolderIdentity:  jerma-control-plane\n  AcquireTime:     \n  RenewTime:       Sun, 16 Aug 2020 21:17:59 +0000\nConditions:\n  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----             ------  -----------------                 ------------------                ------                       -------\n  MemoryPressure   False   Sun, 16 Aug 2020 21:13:51 +0000   Sat, 15 Aug 2020 09:37:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure     False   Sun, 16 Aug 2020 21:13:51 +0000   Sat, 15 Aug 2020 09:37:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure      False   Sun, 16 Aug 2020 21:13:51 +0000   Sat, 15 Aug 2020 09:37:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready            True    Sun, 16 Aug 2020 21:13:51 +0000   Sat, 15 Aug 2020 09:37:40 +0000   KubeletReady                 kubelet is posting ready status\nAddresses:\n  InternalIP:  172.18.0.10\n  Hostname:    jerma-control-plane\nCapacity:\n  cpu:                16\n  ephemeral-storage:  2303189964Ki\n  hugepages-1Gi:      0\n  hugepages-2Mi:      0\n  memory:             131759872Ki\n  pods:               110\nAllocatable:\n  cpu:                16\n  ephemeral-storage:  2303189964Ki\n  hugepages-1Gi:      0\n  hugepages-2Mi:      0\n  memory:             131759872Ki\n  pods:               110\nSystem Info:\n  Machine ID:                 e52c45bc589d48d995e8fd79ad5bf250\n  System UUID:                b981bdc7-d264-48ef-ab5e-3308e23aaf13\n  Boot ID:                    11738d2d-5baa-4089-8e7f-2fb0329fce58\n  Kernel Version:             4.15.0-109-generic\n  OS Image:                   Ubuntu 19.10\n  Operating System:           linux\n  Architecture:               amd64\n  Container Runtime Version:  containerd://1.3.3-14-g449e9269\n  Kubelet Version:            v1.17.5\n  Kube-Proxy Version:         v1.17.5\nPodCIDR:                      10.244.0.0/24\nPodCIDRs:                     10.244.0.0/24\nNon-terminated Pods:          (9 in total)\n  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---\n  kube-system                 coredns-6955765f44-bvrm4                       100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     35h\n  kube-system                 coredns-6955765f44-db8rh                       100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     35h\n  kube-system                 etcd-jerma-control-plane              
         0 (0%)        0 (0%)      0 (0%)           0 (0%)         35h\n  kube-system                 kindnet-j88mt                                  100m (0%)     100m (0%)   50Mi (0%)        50Mi (0%)      35h\n  kube-system                 kube-apiserver-jerma-control-plane             250m (1%)     0 (0%)      0 (0%)           0 (0%)         35h\n  kube-system                 kube-controller-manager-jerma-control-plane    200m (1%)     0 (0%)      0 (0%)           0 (0%)         35h\n  kube-system                 kube-proxy-hmb6l                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         35h\n  kube-system                 kube-scheduler-jerma-control-plane             100m (0%)     0 (0%)      0 (0%)           0 (0%)         35h\n  local-path-storage          local-path-provisioner-58f6947c7-p2cqw         0 (0%)        0 (0%)      0 (0%)           0 (0%)         35h\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests    Limits\n  --------           --------    ------\n  cpu                850m (5%)   100m (0%)\n  memory             190Mi (0%)  390Mi (0%)\n  ephemeral-storage  0 (0%)      0 (0%)\nEvents:              \n"
Aug 16 21:18:02.703: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-9968'
Aug 16 21:18:04.017: INFO: stderr: ""
Aug 16 21:18:04.017: INFO: stdout: "Name:         kubectl-9968\nLabels:       e2e-framework=kubectl\n              e2e-run=e5ea7438-204b-4c86-a5be-a155722d35c4\nAnnotations:  \nStatus:       Active\n\nNo resource quota.\n\nNo LimitRange resource.\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:18:04.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9968" for this suite.

• [SLOW TEST:13.622 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl describe
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1048
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods  [Conformance]","total":278,"completed":180,"skipped":3067,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:18:04.035: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on node default medium
Aug 16 21:18:04.119: INFO: Waiting up to 5m0s for pod "pod-3e83d45e-e9f7-4471-8eeb-80674cfeb2f0" in namespace "emptydir-7073" to be "success or failure"
Aug 16 21:18:04.139: INFO: Pod "pod-3e83d45e-e9f7-4471-8eeb-80674cfeb2f0": Phase="Pending", Reason="", readiness=false. Elapsed: 19.590827ms
Aug 16 21:18:06.144: INFO: Pod "pod-3e83d45e-e9f7-4471-8eeb-80674cfeb2f0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024485502s
Aug 16 21:18:08.151: INFO: Pod "pod-3e83d45e-e9f7-4471-8eeb-80674cfeb2f0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031417281s
STEP: Saw pod success
Aug 16 21:18:08.151: INFO: Pod "pod-3e83d45e-e9f7-4471-8eeb-80674cfeb2f0" satisfied condition "success or failure"
Aug 16 21:18:08.156: INFO: Trying to get logs from node jerma-worker2 pod pod-3e83d45e-e9f7-4471-8eeb-80674cfeb2f0 container test-container: 
STEP: delete the pod
Aug 16 21:18:08.218: INFO: Waiting for pod pod-3e83d45e-e9f7-4471-8eeb-80674cfeb2f0 to disappear
Aug 16 21:18:08.222: INFO: Pod pod-3e83d45e-e9f7-4471-8eeb-80674cfeb2f0 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:18:08.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7073" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":181,"skipped":3122,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:18:08.237: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartNever pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Aug 16 21:18:08.293: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:18:22.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-7481" for this suite.

• [SLOW TEST:14.239 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should invoke init containers on a RestartNever pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":278,"completed":182,"skipped":3139,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from NodePort to ExternalName [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:18:22.477: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from NodePort to ExternalName [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service nodeport-service with the type=NodePort in namespace services-1666
STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service
STEP: creating service externalsvc in namespace services-1666
STEP: creating replication controller externalsvc in namespace services-1666
I0816 21:18:24.187738       7 runners.go:189] Created replication controller with name: externalsvc, namespace: services-1666, replica count: 2
I0816 21:18:27.239039       7 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0816 21:18:30.239757       7 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: changing the NodePort service to type=ExternalName
Aug 16 21:18:30.315: INFO: Creating new exec pod
Aug 16 21:18:34.775: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1666 execpod9hvrn -- /bin/sh -x -c nslookup nodeport-service'
Aug 16 21:18:36.278: INFO: stderr: "I0816 21:18:36.139451    3018 log.go:172] (0x4000a8cbb0) (0x40007201e0) Create stream\nI0816 21:18:36.141759    3018 log.go:172] (0x4000a8cbb0) (0x40007201e0) Stream added, broadcasting: 1\nI0816 21:18:36.154143    3018 log.go:172] (0x4000a8cbb0) Reply frame received for 1\nI0816 21:18:36.154711    3018 log.go:172] (0x4000a8cbb0) (0x4000720280) Create stream\nI0816 21:18:36.154763    3018 log.go:172] (0x4000a8cbb0) (0x4000720280) Stream added, broadcasting: 3\nI0816 21:18:36.156235    3018 log.go:172] (0x4000a8cbb0) Reply frame received for 3\nI0816 21:18:36.156859    3018 log.go:172] (0x4000a8cbb0) (0x40007f2000) Create stream\nI0816 21:18:36.156989    3018 log.go:172] (0x4000a8cbb0) (0x40007f2000) Stream added, broadcasting: 5\nI0816 21:18:36.158454    3018 log.go:172] (0x4000a8cbb0) Reply frame received for 5\nI0816 21:18:36.251699    3018 log.go:172] (0x4000a8cbb0) Data frame received for 5\nI0816 21:18:36.252051    3018 log.go:172] (0x40007f2000) (5) Data frame handling\nI0816 21:18:36.253023    3018 log.go:172] (0x40007f2000) (5) Data frame sent\n+ nslookup nodeport-service\nI0816 21:18:36.255567    3018 log.go:172] (0x4000a8cbb0) Data frame received for 3\nI0816 21:18:36.255703    3018 log.go:172] (0x4000720280) (3) Data frame handling\nI0816 21:18:36.255808    3018 log.go:172] (0x4000720280) (3) Data frame sent\nI0816 21:18:36.256613    3018 log.go:172] (0x4000a8cbb0) Data frame received for 3\nI0816 21:18:36.256780    3018 log.go:172] (0x4000720280) (3) Data frame handling\nI0816 21:18:36.256937    3018 log.go:172] (0x4000720280) (3) Data frame sent\nI0816 21:18:36.258381    3018 log.go:172] (0x4000a8cbb0) Data frame received for 5\nI0816 21:18:36.258485    3018 log.go:172] (0x40007f2000) (5) Data frame handling\nI0816 21:18:36.258907    3018 log.go:172] (0x4000a8cbb0) Data frame received for 3\nI0816 21:18:36.259083    3018 log.go:172] (0x4000720280) (3) Data frame handling\nI0816 21:18:36.260399    3018 log.go:172] (0x4000a8cbb0) Data frame received for 1\nI0816 21:18:36.260500    3018 log.go:172] (0x40007201e0) (1) Data frame handling\nI0816 21:18:36.260585    3018 log.go:172] (0x40007201e0) (1) Data frame sent\nI0816 21:18:36.261593    3018 log.go:172] (0x4000a8cbb0) (0x40007201e0) Stream removed, broadcasting: 1\nI0816 21:18:36.264933    3018 log.go:172] (0x4000a8cbb0) Go away received\nI0816 21:18:36.267677    3018 log.go:172] (0x4000a8cbb0) (0x40007201e0) Stream removed, broadcasting: 1\nI0816 21:18:36.268404    3018 log.go:172] (0x4000a8cbb0) (0x4000720280) Stream removed, broadcasting: 3\nI0816 21:18:36.268684    3018 log.go:172] (0x4000a8cbb0) (0x40007f2000) Stream removed, broadcasting: 5\n"
Aug 16 21:18:36.280: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-1666.svc.cluster.local\tcanonical name = externalsvc.services-1666.svc.cluster.local.\nName:\texternalsvc.services-1666.svc.cluster.local\nAddress: 10.101.154.189\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-1666, will wait for the garbage collector to delete the pods
Aug 16 21:18:36.345: INFO: Deleting ReplicationController externalsvc took: 8.371589ms
Aug 16 21:18:36.646: INFO: Terminating ReplicationController externalsvc pods took: 300.86051ms
Aug 16 21:18:51.802: INFO: Cleaning up the NodePort to ExternalName test service
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:18:51.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1666" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:29.357 seconds]
[sig-network] Services
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from NodePort to ExternalName [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":278,"completed":183,"skipped":3153,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:18:51.835: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 16 21:18:51.931: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5e52e29a-8808-4199-85c0-714d4a989921" in namespace "projected-3868" to be "success or failure"
Aug 16 21:18:51.969: INFO: Pod "downwardapi-volume-5e52e29a-8808-4199-85c0-714d4a989921": Phase="Pending", Reason="", readiness=false. Elapsed: 38.027039ms
Aug 16 21:18:53.975: INFO: Pod "downwardapi-volume-5e52e29a-8808-4199-85c0-714d4a989921": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044275282s
Aug 16 21:18:55.981: INFO: Pod "downwardapi-volume-5e52e29a-8808-4199-85c0-714d4a989921": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049736116s
Aug 16 21:18:58.152: INFO: Pod "downwardapi-volume-5e52e29a-8808-4199-85c0-714d4a989921": Phase="Pending", Reason="", readiness=false. Elapsed: 6.220677049s
Aug 16 21:19:00.172: INFO: Pod "downwardapi-volume-5e52e29a-8808-4199-85c0-714d4a989921": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.241061462s
STEP: Saw pod success
Aug 16 21:19:00.172: INFO: Pod "downwardapi-volume-5e52e29a-8808-4199-85c0-714d4a989921" satisfied condition "success or failure"
Aug 16 21:19:00.176: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-5e52e29a-8808-4199-85c0-714d4a989921 container client-container: 
STEP: delete the pod
Aug 16 21:19:00.233: INFO: Waiting for pod downwardapi-volume-5e52e29a-8808-4199-85c0-714d4a989921 to disappear
Aug 16 21:19:00.237: INFO: Pod downwardapi-volume-5e52e29a-8808-4199-85c0-714d4a989921 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:19:00.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3868" for this suite.

• [SLOW TEST:8.417 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":184,"skipped":3162,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should deny crd creation [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:19:00.256: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 16 21:19:04.927: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 16 21:19:07.124: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733209544, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733209544, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733209545, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733209544, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 16 21:19:09.130: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733209544, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733209544, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733209545, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733209544, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 16 21:19:12.162: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should deny crd creation [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the crd webhook via the AdmissionRegistration API
STEP: Creating a custom resource definition that should be denied by the webhook
Aug 16 21:19:12.189: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:19:12.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2960" for this suite.
STEP: Destroying namespace "webhook-2960-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:12.162 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should deny crd creation [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":278,"completed":185,"skipped":3201,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:19:12.419: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test substitution in container's args
Aug 16 21:19:12.524: INFO: Waiting up to 5m0s for pod "var-expansion-9ed462c3-788f-4f55-b2ad-dea530ea19cd" in namespace "var-expansion-9777" to be "success or failure"
Aug 16 21:19:12.530: INFO: Pod "var-expansion-9ed462c3-788f-4f55-b2ad-dea530ea19cd": Phase="Pending", Reason="", readiness=false. Elapsed: 5.988124ms
Aug 16 21:19:14.534: INFO: Pod "var-expansion-9ed462c3-788f-4f55-b2ad-dea530ea19cd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010064297s
Aug 16 21:19:16.541: INFO: Pod "var-expansion-9ed462c3-788f-4f55-b2ad-dea530ea19cd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.016422716s
Aug 16 21:19:18.546: INFO: Pod "var-expansion-9ed462c3-788f-4f55-b2ad-dea530ea19cd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.022269403s
STEP: Saw pod success
Aug 16 21:19:18.547: INFO: Pod "var-expansion-9ed462c3-788f-4f55-b2ad-dea530ea19cd" satisfied condition "success or failure"
Aug 16 21:19:18.551: INFO: Trying to get logs from node jerma-worker2 pod var-expansion-9ed462c3-788f-4f55-b2ad-dea530ea19cd container dapi-container: 
STEP: delete the pod
Aug 16 21:19:18.596: INFO: Waiting for pod var-expansion-9ed462c3-788f-4f55-b2ad-dea530ea19cd to disappear
Aug 16 21:19:18.607: INFO: Pod var-expansion-9ed462c3-788f-4f55-b2ad-dea530ea19cd no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:19:18.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-9777" for this suite.

• [SLOW TEST:6.202 seconds]
[k8s.io] Variable Expansion
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":278,"completed":186,"skipped":3203,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:19:18.623: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl run pod
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1760
[It] should create a pod from an image when restart is Never  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Aug 16 21:19:18.727: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-942'
Aug 16 21:19:20.038: INFO: stderr: ""
Aug 16 21:19:20.038: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod was created
[AfterEach] Kubectl run pod
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1765
Aug 16 21:19:20.042: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-942'
Aug 16 21:19:31.590: INFO: stderr: ""
Aug 16 21:19:31.590: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:19:31.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-942" for this suite.

• [SLOW TEST:12.979 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run pod
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1756
    should create a pod from an image when restart is Never  [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never  [Conformance]","total":278,"completed":187,"skipped":3220,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
S
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:19:31.603: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Aug 16 21:19:31.720: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-1326 /api/v1/namespaces/watch-1326/configmaps/e2e-watch-test-label-changed abaef79c-0b4e-431f-b190-d45ddd81928f 505810 0 2020-08-16 21:19:31 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Aug 16 21:19:31.721: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-1326 /api/v1/namespaces/watch-1326/configmaps/e2e-watch-test-label-changed abaef79c-0b4e-431f-b190-d45ddd81928f 505811 0 2020-08-16 21:19:31 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Aug 16 21:19:31.722: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-1326 /api/v1/namespaces/watch-1326/configmaps/e2e-watch-test-label-changed abaef79c-0b4e-431f-b190-d45ddd81928f 505812 0 2020-08-16 21:19:31 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Aug 16 21:19:42.071: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-1326 /api/v1/namespaces/watch-1326/configmaps/e2e-watch-test-label-changed abaef79c-0b4e-431f-b190-d45ddd81928f 505851 0 2020-08-16 21:19:31 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Aug 16 21:19:42.072: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-1326 /api/v1/namespaces/watch-1326/configmaps/e2e-watch-test-label-changed abaef79c-0b4e-431f-b190-d45ddd81928f 505852 0 2020-08-16 21:19:31 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Aug 16 21:19:42.073: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-1326 /api/v1/namespaces/watch-1326/configmaps/e2e-watch-test-label-changed abaef79c-0b4e-431f-b190-d45ddd81928f 505853 0 2020-08-16 21:19:31 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:19:42.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-1326" for this suite.

• [SLOW TEST:10.544 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":278,"completed":188,"skipped":3221,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:19:42.151: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:19:46.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-8339" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":278,"completed":189,"skipped":3278,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-api-machinery] Servers with support for Table transformation 
  should return a 406 for a backend which does not implement metadata [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:19:46.422: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename tables
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:46
[It] should return a 406 for a backend which does not implement metadata [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:19:46.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-8810" for this suite.
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":278,"completed":190,"skipped":3287,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:19:46.690: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating Pod
STEP: Waiting for the pod running
STEP: Getting the pod
STEP: Reading file content from the nginx-container
Aug 16 21:19:55.134: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-9164 PodName:pod-sharedvolume-ca18f3f4-de90-4457-aafd-a41c356b289d ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 16 21:19:55.135: INFO: >>> kubeConfig: /root/.kube/config
I0816 21:19:55.194511       7 log.go:172] (0x4003286420) (0x4002dd19a0) Create stream
I0816 21:19:55.194733       7 log.go:172] (0x4003286420) (0x4002dd19a0) Stream added, broadcasting: 1
I0816 21:19:55.198131       7 log.go:172] (0x4003286420) Reply frame received for 1
I0816 21:19:55.198267       7 log.go:172] (0x4003286420) (0x4002e87680) Create stream
I0816 21:19:55.198327       7 log.go:172] (0x4003286420) (0x4002e87680) Stream added, broadcasting: 3
I0816 21:19:55.199418       7 log.go:172] (0x4003286420) Reply frame received for 3
I0816 21:19:55.199544       7 log.go:172] (0x4003286420) (0x4002dd1a40) Create stream
I0816 21:19:55.199616       7 log.go:172] (0x4003286420) (0x4002dd1a40) Stream added, broadcasting: 5
I0816 21:19:55.200948       7 log.go:172] (0x4003286420) Reply frame received for 5
I0816 21:19:55.257631       7 log.go:172] (0x4003286420) Data frame received for 3
I0816 21:19:55.257831       7 log.go:172] (0x4002e87680) (3) Data frame handling
I0816 21:19:55.257984       7 log.go:172] (0x4002e87680) (3) Data frame sent
I0816 21:19:55.258089       7 log.go:172] (0x4003286420) Data frame received for 3
I0816 21:19:55.258194       7 log.go:172] (0x4003286420) Data frame received for 5
I0816 21:19:55.258337       7 log.go:172] (0x4002dd1a40) (5) Data frame handling
I0816 21:19:55.258483       7 log.go:172] (0x4002e87680) (3) Data frame handling
I0816 21:19:55.259163       7 log.go:172] (0x4003286420) Data frame received for 1
I0816 21:19:55.259289       7 log.go:172] (0x4002dd19a0) (1) Data frame handling
I0816 21:19:55.259431       7 log.go:172] (0x4002dd19a0) (1) Data frame sent
I0816 21:19:55.259597       7 log.go:172] (0x4003286420) (0x4002dd19a0) Stream removed, broadcasting: 1
I0816 21:19:55.259722       7 log.go:172] (0x4003286420) Go away received
I0816 21:19:55.260052       7 log.go:172] (0x4003286420) (0x4002dd19a0) Stream removed, broadcasting: 1
I0816 21:19:55.260150       7 log.go:172] (0x4003286420) (0x4002e87680) Stream removed, broadcasting: 3
I0816 21:19:55.260229       7 log.go:172] (0x4003286420) (0x4002dd1a40) Stream removed, broadcasting: 5
Aug 16 21:19:55.260: INFO: Exec stderr: ""
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:19:55.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9164" for this suite.

• [SLOW TEST:8.620 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  pod should support shared volumes between containers [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":278,"completed":191,"skipped":3294,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:19:55.313: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Update Demo
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:325
[It] should do a rolling update of a replication controller  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the initial replication controller
Aug 16 21:19:55.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5179'
Aug 16 21:19:56.976: INFO: stderr: ""
Aug 16 21:19:56.977: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 16 21:19:56.977: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5179'
Aug 16 21:19:58.262: INFO: stderr: ""
Aug 16 21:19:58.262: INFO: stdout: "update-demo-nautilus-69kpg update-demo-nautilus-dkcpb "
Aug 16 21:19:58.262: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-69kpg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5179'
Aug 16 21:19:59.666: INFO: stderr: ""
Aug 16 21:19:59.666: INFO: stdout: ""
Aug 16 21:19:59.666: INFO: update-demo-nautilus-69kpg is created but not running
Aug 16 21:20:04.667: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5179'
Aug 16 21:20:05.950: INFO: stderr: ""
Aug 16 21:20:05.950: INFO: stdout: "update-demo-nautilus-69kpg update-demo-nautilus-dkcpb "
Aug 16 21:20:05.950: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-69kpg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5179'
Aug 16 21:20:07.180: INFO: stderr: ""
Aug 16 21:20:07.180: INFO: stdout: "true"
Aug 16 21:20:07.180: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-69kpg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5179'
Aug 16 21:20:08.407: INFO: stderr: ""
Aug 16 21:20:08.407: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 16 21:20:08.407: INFO: validating pod update-demo-nautilus-69kpg
Aug 16 21:20:08.413: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 16 21:20:08.413: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 16 21:20:08.413: INFO: update-demo-nautilus-69kpg is verified up and running
Aug 16 21:20:08.413: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dkcpb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5179'
Aug 16 21:20:09.673: INFO: stderr: ""
Aug 16 21:20:09.673: INFO: stdout: "true"
Aug 16 21:20:09.673: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dkcpb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5179'
Aug 16 21:20:10.903: INFO: stderr: ""
Aug 16 21:20:10.903: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 16 21:20:10.904: INFO: validating pod update-demo-nautilus-dkcpb
Aug 16 21:20:10.909: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 16 21:20:10.909: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 16 21:20:10.909: INFO: update-demo-nautilus-dkcpb is verified up and running
STEP: rolling-update to new replication controller
Aug 16 21:20:10.919: INFO: scanned /root for discovery docs: 
Aug 16 21:20:10.919: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-5179'
Aug 16 21:20:36.082: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Aug 16 21:20:36.082: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 16 21:20:36.083: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5179'
Aug 16 21:20:37.352: INFO: stderr: ""
Aug 16 21:20:37.352: INFO: stdout: "update-demo-kitten-qqc5c update-demo-kitten-whvjd "
Aug 16 21:20:37.352: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-qqc5c -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5179'
Aug 16 21:20:38.569: INFO: stderr: ""
Aug 16 21:20:38.570: INFO: stdout: "true"
Aug 16 21:20:38.570: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-qqc5c -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5179'
Aug 16 21:20:39.821: INFO: stderr: ""
Aug 16 21:20:39.821: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Aug 16 21:20:39.821: INFO: validating pod update-demo-kitten-qqc5c
Aug 16 21:20:39.827: INFO: got data: {
  "image": "kitten.jpg"
}

Aug 16 21:20:39.827: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Aug 16 21:20:39.827: INFO: update-demo-kitten-qqc5c is verified up and running
Aug 16 21:20:39.827: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-whvjd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5179'
Aug 16 21:20:41.081: INFO: stderr: ""
Aug 16 21:20:41.081: INFO: stdout: "true"
Aug 16 21:20:41.082: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-whvjd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5179'
Aug 16 21:20:42.370: INFO: stderr: ""
Aug 16 21:20:42.370: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Aug 16 21:20:42.370: INFO: validating pod update-demo-kitten-whvjd
Aug 16 21:20:42.376: INFO: got data: {
  "image": "kitten.jpg"
}

Aug 16 21:20:42.376: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Aug 16 21:20:42.376: INFO: update-demo-kitten-whvjd is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:20:42.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5179" for this suite.

• [SLOW TEST:47.076 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:323
    should do a rolling update of a replication controller  [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller  [Conformance]","total":278,"completed":192,"skipped":3322,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:20:42.390: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-685f47ce-d2ae-4b29-bf5e-a8b9e8256896
STEP: Creating a pod to test consume secrets
Aug 16 21:20:42.483: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-62b13f7a-30fa-4cc6-bd6c-7b4de0225c36" in namespace "projected-2171" to be "success or failure"
Aug 16 21:20:42.518: INFO: Pod "pod-projected-secrets-62b13f7a-30fa-4cc6-bd6c-7b4de0225c36": Phase="Pending", Reason="", readiness=false. Elapsed: 34.718639ms
Aug 16 21:20:44.525: INFO: Pod "pod-projected-secrets-62b13f7a-30fa-4cc6-bd6c-7b4de0225c36": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041101394s
Aug 16 21:20:46.532: INFO: Pod "pod-projected-secrets-62b13f7a-30fa-4cc6-bd6c-7b4de0225c36": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.048217925s
STEP: Saw pod success
Aug 16 21:20:46.532: INFO: Pod "pod-projected-secrets-62b13f7a-30fa-4cc6-bd6c-7b4de0225c36" satisfied condition "success or failure"
Aug 16 21:20:46.536: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-62b13f7a-30fa-4cc6-bd6c-7b4de0225c36 container projected-secret-volume-test: 
STEP: delete the pod
Aug 16 21:20:46.747: INFO: Waiting for pod pod-projected-secrets-62b13f7a-30fa-4cc6-bd6c-7b4de0225c36 to disappear
Aug 16 21:20:46.790: INFO: Pod pod-projected-secrets-62b13f7a-30fa-4cc6-bd6c-7b4de0225c36 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:20:46.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2171" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":193,"skipped":3322,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run deployment 
  should create a deployment from an image [Deprecated] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:20:46.807: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl run deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1629
[It] should create a deployment from an image [Deprecated] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Aug 16 21:20:46.897: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --generator=deployment/apps.v1 --namespace=kubectl-6983'
Aug 16 21:20:48.234: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Aug 16 21:20:48.235: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n"
STEP: verifying the deployment e2e-test-httpd-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-httpd-deployment was created
[AfterEach] Kubectl run deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1634
Aug 16 21:20:52.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-6983'
Aug 16 21:20:53.664: INFO: stderr: ""
Aug 16 21:20:53.664: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:20:53.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6983" for this suite.

• [SLOW TEST:6.868 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1625
    should create a deployment from an image [Deprecated] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Deprecated] [Conformance]","total":278,"completed":194,"skipped":3332,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:20:53.676: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:20:53.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-6020" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":278,"completed":195,"skipped":3336,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}

------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:20:53.848: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test override arguments
Aug 16 21:20:54.229: INFO: Waiting up to 5m0s for pod "client-containers-68649f0f-9c18-4895-845f-0f6dc4e15ef8" in namespace "containers-6010" to be "success or failure"
Aug 16 21:20:54.375: INFO: Pod "client-containers-68649f0f-9c18-4895-845f-0f6dc4e15ef8": Phase="Pending", Reason="", readiness=false. Elapsed: 145.112384ms
Aug 16 21:20:56.526: INFO: Pod "client-containers-68649f0f-9c18-4895-845f-0f6dc4e15ef8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.29611587s
Aug 16 21:20:58.533: INFO: Pod "client-containers-68649f0f-9c18-4895-845f-0f6dc4e15ef8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.30322884s
Aug 16 21:21:00.540: INFO: Pod "client-containers-68649f0f-9c18-4895-845f-0f6dc4e15ef8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.310835444s
STEP: Saw pod success
Aug 16 21:21:00.541: INFO: Pod "client-containers-68649f0f-9c18-4895-845f-0f6dc4e15ef8" satisfied condition "success or failure"
Aug 16 21:21:00.545: INFO: Trying to get logs from node jerma-worker2 pod client-containers-68649f0f-9c18-4895-845f-0f6dc4e15ef8 container test-container: 
STEP: delete the pod
Aug 16 21:21:00.591: INFO: Waiting for pod client-containers-68649f0f-9c18-4895-845f-0f6dc4e15ef8 to disappear
Aug 16 21:21:00.602: INFO: Pod client-containers-68649f0f-9c18-4895-845f-0f6dc4e15ef8 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:21:00.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-6010" for this suite.

• [SLOW TEST:6.768 seconds]
[k8s.io] Docker Containers
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":278,"completed":196,"skipped":3336,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:21:00.617: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should rollback without unnecessary restarts [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 16 21:21:00.765: INFO: Create a RollingUpdate DaemonSet
Aug 16 21:21:00.771: INFO: Check that daemon pods launch on every node of the cluster
Aug 16 21:21:00.780: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 16 21:21:00.786: INFO: Number of nodes with available pods: 0
Aug 16 21:21:00.786: INFO: Node jerma-worker is running more than one daemon pod
Aug 16 21:21:01.843: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 16 21:21:01.848: INFO: Number of nodes with available pods: 0
Aug 16 21:21:01.848: INFO: Node jerma-worker is running more than one daemon pod
Aug 16 21:21:02.878: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 16 21:21:02.882: INFO: Number of nodes with available pods: 0
Aug 16 21:21:02.882: INFO: Node jerma-worker is running more than one daemon pod
Aug 16 21:21:03.796: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 16 21:21:03.809: INFO: Number of nodes with available pods: 0
Aug 16 21:21:03.809: INFO: Node jerma-worker is running more than one daemon pod
Aug 16 21:21:04.794: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 16 21:21:04.798: INFO: Number of nodes with available pods: 0
Aug 16 21:21:04.798: INFO: Node jerma-worker is running more than one daemon pod
Aug 16 21:21:05.794: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 16 21:21:05.800: INFO: Number of nodes with available pods: 2
Aug 16 21:21:05.800: INFO: Number of running nodes: 2, number of available pods: 2
Aug 16 21:21:05.801: INFO: Update the DaemonSet to trigger a rollout
Aug 16 21:21:05.820: INFO: Updating DaemonSet daemon-set
Aug 16 21:21:10.906: INFO: Roll back the DaemonSet before rollout is complete
Aug 16 21:21:10.915: INFO: Updating DaemonSet daemon-set
Aug 16 21:21:10.915: INFO: Make sure DaemonSet rollback is complete
Aug 16 21:21:10.941: INFO: Wrong image for pod: daemon-set-f69vw. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Aug 16 21:21:10.941: INFO: Pod daemon-set-f69vw is not available
Aug 16 21:21:10.961: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 16 21:21:11.968: INFO: Wrong image for pod: daemon-set-f69vw. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Aug 16 21:21:11.968: INFO: Pod daemon-set-f69vw is not available
Aug 16 21:21:11.977: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 16 21:21:12.970: INFO: Wrong image for pod: daemon-set-f69vw. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Aug 16 21:21:12.970: INFO: Pod daemon-set-f69vw is not available
Aug 16 21:21:12.978: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 16 21:21:14.250: INFO: Wrong image for pod: daemon-set-f69vw. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Aug 16 21:21:14.250: INFO: Pod daemon-set-f69vw is not available
Aug 16 21:21:14.257: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 16 21:21:14.970: INFO: Wrong image for pod: daemon-set-f69vw. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Aug 16 21:21:14.971: INFO: Pod daemon-set-f69vw is not available
Aug 16 21:21:14.980: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 16 21:21:15.969: INFO: Pod daemon-set-qxdjl is not available
Aug 16 21:21:15.977: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6505, will wait for the garbage collector to delete the pods
Aug 16 21:21:16.048: INFO: Deleting DaemonSet.extensions daemon-set took: 6.693797ms
Aug 16 21:21:16.349: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.693259ms
Aug 16 21:21:21.753: INFO: Number of nodes with available pods: 0
Aug 16 21:21:21.753: INFO: Number of running nodes: 0, number of available pods: 0
Aug 16 21:21:21.757: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6505/daemonsets","resourceVersion":"506518"},"items":null}

Aug 16 21:21:21.760: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6505/pods","resourceVersion":"506518"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:21:21.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-6505" for this suite.

• [SLOW TEST:21.168 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":278,"completed":197,"skipped":3338,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:21:21.786: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 16 21:21:26.374: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 16 21:21:28.389: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733209686, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733209686, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733209686, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733209686, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 16 21:21:30.395: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733209686, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733209686, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733209686, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733209686, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 16 21:21:33.417: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 16 21:21:33.422: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-6634-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:21:34.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1855" for this suite.
STEP: Destroying namespace "webhook-1855-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:13.137 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":278,"completed":198,"skipped":3346,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert a non homogeneous list of CRs [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:21:34.924: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Aug 16 21:21:38.033: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Aug 16 21:21:40.081: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733209698, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733209698, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733209698, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733209698, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 16 21:21:43.157: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert a non homogeneous list of CRs [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 16 21:21:43.163: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: Create a v2 custom resource
STEP: List CRs in v1
STEP: List CRs in v2
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:21:44.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-515" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136

• [SLOW TEST:9.870 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert a non homogeneous list of CRs [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":278,"completed":199,"skipped":3361,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run job 
  should create a job from an image when restart is OnFailure [Deprecated] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:21:44.797: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl run job
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685
[It] should create a job from an image when restart is OnFailure [Deprecated] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Aug 16 21:21:44.935: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-5211'
Aug 16 21:21:46.314: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Aug 16 21:21:46.314: INFO: stdout: "job.batch/e2e-test-httpd-job created\n"
STEP: verifying the job e2e-test-httpd-job was created
[AfterEach] Kubectl run job
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690
Aug 16 21:21:46.760: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-httpd-job --namespace=kubectl-5211'
Aug 16 21:21:48.115: INFO: stderr: ""
Aug 16 21:21:48.116: INFO: stdout: "job.batch \"e2e-test-httpd-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:21:48.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5211" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Deprecated] [Conformance]","total":278,"completed":200,"skipped":3374,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:21:48.194: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should serve multiport endpoints from pods  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating service multi-endpoint-test in namespace services-8636
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8636 to expose endpoints map[]
Aug 16 21:21:48.454: INFO: successfully validated that service multi-endpoint-test in namespace services-8636 exposes endpoints map[] (34.965576ms elapsed)
STEP: Creating pod pod1 in namespace services-8636
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8636 to expose endpoints map[pod1:[100]]
Aug 16 21:21:52.844: INFO: successfully validated that service multi-endpoint-test in namespace services-8636 exposes endpoints map[pod1:[100]] (4.355147592s elapsed)
STEP: Creating pod pod2 in namespace services-8636
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8636 to expose endpoints map[pod1:[100] pod2:[101]]
Aug 16 21:21:57.043: INFO: successfully validated that service multi-endpoint-test in namespace services-8636 exposes endpoints map[pod1:[100] pod2:[101]] (4.192222701s elapsed)
STEP: Deleting pod pod1 in namespace services-8636
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8636 to expose endpoints map[pod2:[101]]
Aug 16 21:21:57.092: INFO: successfully validated that service multi-endpoint-test in namespace services-8636 exposes endpoints map[pod2:[101]] (43.794512ms elapsed)
STEP: Deleting pod pod2 in namespace services-8636
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8636 to expose endpoints map[]
Aug 16 21:21:57.116: INFO: successfully validated that service multi-endpoint-test in namespace services-8636 exposes endpoints map[] (17.435732ms elapsed)
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:21:57.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8636" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:9.444 seconds]
[sig-network] Services
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods  [Conformance]","total":278,"completed":201,"skipped":3410,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:21:57.640: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2632.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-2632.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2632.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-2632.svc.cluster.local; sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 16 21:22:07.659: INFO: DNS probes using dns-test-e05d13b6-0c7f-4e6a-88de-86b5a975ac5b succeeded

STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2632.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-2632.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2632.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-2632.svc.cluster.local; sleep 1; done

STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 16 21:22:14.448: INFO: File wheezy_udp@dns-test-service-3.dns-2632.svc.cluster.local from pod  dns-2632/dns-test-0783c419-6123-4379-a2ae-94bded6a96ee contains 'foo.example.com.
' instead of 'bar.example.com.'
Aug 16 21:22:14.451: INFO: File jessie_udp@dns-test-service-3.dns-2632.svc.cluster.local from pod  dns-2632/dns-test-0783c419-6123-4379-a2ae-94bded6a96ee contains 'foo.example.com.
' instead of 'bar.example.com.'
Aug 16 21:22:14.451: INFO: Lookups using dns-2632/dns-test-0783c419-6123-4379-a2ae-94bded6a96ee failed for: [wheezy_udp@dns-test-service-3.dns-2632.svc.cluster.local jessie_udp@dns-test-service-3.dns-2632.svc.cluster.local]

Aug 16 21:22:19.456: INFO: File wheezy_udp@dns-test-service-3.dns-2632.svc.cluster.local from pod  dns-2632/dns-test-0783c419-6123-4379-a2ae-94bded6a96ee contains 'foo.example.com.
' instead of 'bar.example.com.'
Aug 16 21:22:19.460: INFO: File jessie_udp@dns-test-service-3.dns-2632.svc.cluster.local from pod  dns-2632/dns-test-0783c419-6123-4379-a2ae-94bded6a96ee contains 'foo.example.com.
' instead of 'bar.example.com.'
Aug 16 21:22:19.460: INFO: Lookups using dns-2632/dns-test-0783c419-6123-4379-a2ae-94bded6a96ee failed for: [wheezy_udp@dns-test-service-3.dns-2632.svc.cluster.local jessie_udp@dns-test-service-3.dns-2632.svc.cluster.local]

Aug 16 21:22:24.458: INFO: File wheezy_udp@dns-test-service-3.dns-2632.svc.cluster.local from pod  dns-2632/dns-test-0783c419-6123-4379-a2ae-94bded6a96ee contains 'foo.example.com.
' instead of 'bar.example.com.'
Aug 16 21:22:24.463: INFO: File jessie_udp@dns-test-service-3.dns-2632.svc.cluster.local from pod  dns-2632/dns-test-0783c419-6123-4379-a2ae-94bded6a96ee contains 'foo.example.com.
' instead of 'bar.example.com.'
Aug 16 21:22:24.463: INFO: Lookups using dns-2632/dns-test-0783c419-6123-4379-a2ae-94bded6a96ee failed for: [wheezy_udp@dns-test-service-3.dns-2632.svc.cluster.local jessie_udp@dns-test-service-3.dns-2632.svc.cluster.local]

Aug 16 21:22:29.526: INFO: File wheezy_udp@dns-test-service-3.dns-2632.svc.cluster.local from pod  dns-2632/dns-test-0783c419-6123-4379-a2ae-94bded6a96ee contains 'foo.example.com.
' instead of 'bar.example.com.'
Aug 16 21:22:29.532: INFO: File jessie_udp@dns-test-service-3.dns-2632.svc.cluster.local from pod  dns-2632/dns-test-0783c419-6123-4379-a2ae-94bded6a96ee contains 'foo.example.com.
' instead of 'bar.example.com.'
Aug 16 21:22:29.532: INFO: Lookups using dns-2632/dns-test-0783c419-6123-4379-a2ae-94bded6a96ee failed for: [wheezy_udp@dns-test-service-3.dns-2632.svc.cluster.local jessie_udp@dns-test-service-3.dns-2632.svc.cluster.local]

Aug 16 21:22:34.458: INFO: File wheezy_udp@dns-test-service-3.dns-2632.svc.cluster.local from pod  dns-2632/dns-test-0783c419-6123-4379-a2ae-94bded6a96ee contains 'foo.example.com.
' instead of 'bar.example.com.'
Aug 16 21:22:34.462: INFO: File jessie_udp@dns-test-service-3.dns-2632.svc.cluster.local from pod  dns-2632/dns-test-0783c419-6123-4379-a2ae-94bded6a96ee contains 'foo.example.com.
' instead of 'bar.example.com.'
Aug 16 21:22:34.462: INFO: Lookups using dns-2632/dns-test-0783c419-6123-4379-a2ae-94bded6a96ee failed for: [wheezy_udp@dns-test-service-3.dns-2632.svc.cluster.local jessie_udp@dns-test-service-3.dns-2632.svc.cluster.local]

Aug 16 21:22:39.502: INFO: DNS probes using dns-test-0783c419-6123-4379-a2ae-94bded6a96ee succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2632.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-2632.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2632.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-2632.svc.cluster.local; sleep 1; done

STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 16 21:22:50.553: INFO: DNS probes using dns-test-aa0730ba-ddaf-466b-a3e6-7b8b6345bdbf succeeded

STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:22:50.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-2632" for this suite.

• [SLOW TEST:53.189 seconds]
[sig-network] DNS
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":278,"completed":202,"skipped":3415,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:22:50.831: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Aug 16 21:22:51.177: INFO: Waiting up to 5m0s for pod "downward-api-3a84c720-5763-4576-a7ed-0cba2e795ed5" in namespace "downward-api-8265" to be "success or failure"
Aug 16 21:22:51.239: INFO: Pod "downward-api-3a84c720-5763-4576-a7ed-0cba2e795ed5": Phase="Pending", Reason="", readiness=false. Elapsed: 62.425719ms
Aug 16 21:22:53.247: INFO: Pod "downward-api-3a84c720-5763-4576-a7ed-0cba2e795ed5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06969119s
Aug 16 21:22:55.507: INFO: Pod "downward-api-3a84c720-5763-4576-a7ed-0cba2e795ed5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.329942556s
Aug 16 21:22:57.645: INFO: Pod "downward-api-3a84c720-5763-4576-a7ed-0cba2e795ed5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.468245137s
STEP: Saw pod success
Aug 16 21:22:57.645: INFO: Pod "downward-api-3a84c720-5763-4576-a7ed-0cba2e795ed5" satisfied condition "success or failure"
Aug 16 21:22:57.677: INFO: Trying to get logs from node jerma-worker2 pod downward-api-3a84c720-5763-4576-a7ed-0cba2e795ed5 container dapi-container: 
STEP: delete the pod
Aug 16 21:22:58.501: INFO: Waiting for pod downward-api-3a84c720-5763-4576-a7ed-0cba2e795ed5 to disappear
Aug 16 21:22:58.586: INFO: Pod downward-api-3a84c720-5763-4576-a7ed-0cba2e795ed5 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:22:58.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8265" for this suite.

• [SLOW TEST:8.055 seconds]
[sig-node] Downward API
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":278,"completed":203,"skipped":3438,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should unconditionally reject operations on fail closed webhook [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:22:58.889: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 16 21:23:01.676: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 16 21:23:04.717: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733209781, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733209781, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733209781, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733209781, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 16 21:23:06.747: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733209781, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733209781, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733209781, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733209781, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 16 21:23:09.799: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:23:09.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5185" for this suite.
STEP: Destroying namespace "webhook-5185-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:11.074 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should unconditionally reject operations on fail closed webhook [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":278,"completed":204,"skipped":3450,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:23:09.963: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl replace
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1796
[It] should update a single-container pod's image  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Aug 16 21:23:10.009: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-2013'
Aug 16 21:23:11.304: INFO: stderr: ""
Aug 16 21:23:11.305: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod is running
STEP: verifying the pod e2e-test-httpd-pod was created
Aug 16 21:23:16.358: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-2013 -o json'
Aug 16 21:23:17.585: INFO: stderr: ""
Aug 16 21:23:17.585: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-08-16T21:23:11Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-httpd-pod\"\n        },\n        \"name\": \"e2e-test-httpd-pod\",\n        \"namespace\": \"kubectl-2013\",\n        \"resourceVersion\": \"507279\",\n        \"selfLink\": \"/api/v1/namespaces/kubectl-2013/pods/e2e-test-httpd-pod\",\n        \"uid\": \"9cb90f4c-dd07-488c-8c24-dfb429ee4515\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-httpd-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-hqshd\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"jerma-worker\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-hqshd\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-hqshd\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-08-16T21:23:11Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-08-16T21:23:14Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-08-16T21:23:14Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-08-16T21:23:11Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"containerd://604a19b93b7f6419935cef6993855bb48bf338e416f84437b4330ce5eab9b3a7\",\n                
\"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n                \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-httpd-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"started\": true,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2020-08-16T21:23:13Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"172.18.0.6\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.244.2.3\",\n        \"podIPs\": [\n            {\n                \"ip\": \"10.244.2.3\"\n            }\n        ],\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-08-16T21:23:11Z\"\n    }\n}\n"
STEP: replace the image in the pod
Aug 16 21:23:17.587: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-2013'
Aug 16 21:23:19.434: INFO: stderr: ""
Aug 16 21:23:19.434: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n"
STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29
[AfterEach] Kubectl replace
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1801
Aug 16 21:23:19.441: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-2013'
Aug 16 21:23:32.158: INFO: stderr: ""
Aug 16 21:23:32.158: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:23:32.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2013" for this suite.

• [SLOW TEST:22.245 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl replace
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1792
    should update a single-container pod's image  [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image  [Conformance]","total":278,"completed":205,"skipped":3456,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:23:32.211: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Aug 16 21:23:32.387: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 16 21:23:32.474: INFO: Waiting for terminating namespaces to be deleted...
Aug 16 21:23:32.478: INFO: 
Logging pods the kubelet thinks are on node jerma-worker before test
Aug 16 21:23:32.495: INFO: kube-proxy-lgd85 from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded)
Aug 16 21:23:32.495: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 16 21:23:32.495: INFO: kindnet-tfrcx from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded)
Aug 16 21:23:32.495: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 16 21:23:32.495: INFO: 
Logging pods the kubelet thinks are on node jerma-worker2 before test
Aug 16 21:23:32.517: INFO: kube-proxy-ckhpn from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded)
Aug 16 21:23:32.517: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 16 21:23:32.517: INFO: kindnet-gxck9 from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded)
Aug 16 21:23:32.517: INFO: 	Container kindnet-cni ready: true, restart count 0
[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-8a5eea5b-702f-4ea6-ab47-818a9e9c9dfd 90
STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled
STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled
STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides
STEP: removing the label kubernetes.io/e2e-8a5eea5b-702f-4ea6-ab47-818a9e9c9dfd off the node jerma-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-8a5eea5b-702f-4ea6-ab47-818a9e9c9dfd
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:23:54.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-4641" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77

• [SLOW TEST:22.587 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":278,"completed":206,"skipped":3476,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:23:54.800: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on node default medium
Aug 16 21:23:54.866: INFO: Waiting up to 5m0s for pod "pod-63551f52-e030-4776-b7b6-da87e434f7de" in namespace "emptydir-1214" to be "success or failure"
Aug 16 21:23:54.876: INFO: Pod "pod-63551f52-e030-4776-b7b6-da87e434f7de": Phase="Pending", Reason="", readiness=false. Elapsed: 9.912496ms
Aug 16 21:23:56.900: INFO: Pod "pod-63551f52-e030-4776-b7b6-da87e434f7de": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033787153s
Aug 16 21:23:58.907: INFO: Pod "pod-63551f52-e030-4776-b7b6-da87e434f7de": Phase="Running", Reason="", readiness=true. Elapsed: 4.040572114s
Aug 16 21:24:00.911: INFO: Pod "pod-63551f52-e030-4776-b7b6-da87e434f7de": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.04527241s
STEP: Saw pod success
Aug 16 21:24:00.912: INFO: Pod "pod-63551f52-e030-4776-b7b6-da87e434f7de" satisfied condition "success or failure"
Aug 16 21:24:00.915: INFO: Trying to get logs from node jerma-worker2 pod pod-63551f52-e030-4776-b7b6-da87e434f7de container test-container: 
STEP: delete the pod
Aug 16 21:24:01.022: INFO: Waiting for pod pod-63551f52-e030-4776-b7b6-da87e434f7de to disappear
Aug 16 21:24:01.025: INFO: Pod pod-63551f52-e030-4776-b7b6-da87e434f7de no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:24:01.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1214" for this suite.

• [SLOW TEST:6.234 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":207,"skipped":3483,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:24:01.036: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-map-f7306421-e267-4aaa-b246-0b0711bc0d0f
STEP: Creating a pod to test consume configMaps
Aug 16 21:24:01.367: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-67f9f23d-f526-4b56-ad83-5213ac4e5eef" in namespace "projected-2306" to be "success or failure"
Aug 16 21:24:01.471: INFO: Pod "pod-projected-configmaps-67f9f23d-f526-4b56-ad83-5213ac4e5eef": Phase="Pending", Reason="", readiness=false. Elapsed: 103.365497ms
Aug 16 21:24:03.729: INFO: Pod "pod-projected-configmaps-67f9f23d-f526-4b56-ad83-5213ac4e5eef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.361705843s
Aug 16 21:24:05.735: INFO: Pod "pod-projected-configmaps-67f9f23d-f526-4b56-ad83-5213ac4e5eef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.367914836s
STEP: Saw pod success
Aug 16 21:24:05.736: INFO: Pod "pod-projected-configmaps-67f9f23d-f526-4b56-ad83-5213ac4e5eef" satisfied condition "success or failure"
Aug 16 21:24:05.740: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-67f9f23d-f526-4b56-ad83-5213ac4e5eef container projected-configmap-volume-test: 
STEP: delete the pod
Aug 16 21:24:05.788: INFO: Waiting for pod pod-projected-configmaps-67f9f23d-f526-4b56-ad83-5213ac4e5eef to disappear
Aug 16 21:24:05.797: INFO: Pod pod-projected-configmaps-67f9f23d-f526-4b56-ad83-5213ac4e5eef no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:24:05.797: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2306" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":208,"skipped":3526,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:24:05.814: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicaSet
STEP: Ensuring resource quota status captures replicaset creation
STEP: Deleting a ReplicaSet
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:24:17.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-3889" for this suite.

• [SLOW TEST:11.679 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":278,"completed":209,"skipped":3555,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl rolling-update 
  should support rolling-update to same image [Deprecated] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:24:17.496: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl rolling-update
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1587
[It] should support rolling-update to same image [Deprecated] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Aug 16 21:24:17.903: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-3092'
Aug 16 21:24:23.040: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Aug 16 21:24:23.040: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n"
STEP: verifying the rc e2e-test-httpd-rc was created
Aug 16 21:24:23.258: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
Aug 16 21:24:23.640: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Aug 16 21:24:23.655: INFO: scanned /root for discovery docs: 
Aug 16 21:24:23.656: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-httpd-rc --update-period=1s --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-3092'
Aug 16 21:24:43.133: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Aug 16 21:24:43.133: INFO: stdout: "Created e2e-test-httpd-rc-b39229d956fb341e582c0c902eeba200\nScaling up e2e-test-httpd-rc-b39229d956fb341e582c0c902eeba200 from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-b39229d956fb341e582c0c902eeba200 up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-b39229d956fb341e582c0c902eeba200 to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n"
Aug 16 21:24:43.134: INFO: stdout: "Created e2e-test-httpd-rc-b39229d956fb341e582c0c902eeba200\nScaling up e2e-test-httpd-rc-b39229d956fb341e582c0c902eeba200 from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-b39229d956fb341e582c0c902eeba200 up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-b39229d956fb341e582c0c902eeba200 to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-httpd-rc pods to come up.
Aug 16 21:24:43.134: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-3092'
Aug 16 21:24:44.380: INFO: stderr: ""
Aug 16 21:24:44.380: INFO: stdout: "e2e-test-httpd-rc-b39229d956fb341e582c0c902eeba200-z9ld9 "
Aug 16 21:24:44.380: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-b39229d956fb341e582c0c902eeba200-z9ld9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-httpd-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3092'
Aug 16 21:24:45.625: INFO: stderr: ""
Aug 16 21:24:45.625: INFO: stdout: "true"
Aug 16 21:24:45.625: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-b39229d956fb341e582c0c902eeba200-z9ld9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-httpd-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3092'
Aug 16 21:24:46.863: INFO: stderr: ""
Aug 16 21:24:46.863: INFO: stdout: "docker.io/library/httpd:2.4.38-alpine"
Aug 16 21:24:46.863: INFO: e2e-test-httpd-rc-b39229d956fb341e582c0c902eeba200-z9ld9 is verified up and running
[AfterEach] Kubectl rolling-update
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1593
Aug 16 21:24:46.864: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-3092'
Aug 16 21:24:48.105: INFO: stderr: ""
Aug 16 21:24:48.105: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:24:48.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3092" for this suite.

• [SLOW TEST:30.625 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl rolling-update
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1582
    should support rolling-update to same image [Deprecated] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Deprecated] [Conformance]","total":278,"completed":210,"skipped":3565,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SS
------------------------------
[sig-cli] Kubectl client Kubectl expose 
  should create services for rc  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:24:48.122: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should create services for rc  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating Agnhost RC
Aug 16 21:24:48.168: INFO: namespace kubectl-9760
Aug 16 21:24:48.169: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9760'
Aug 16 21:24:49.789: INFO: stderr: ""
Aug 16 21:24:49.789: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Aug 16 21:24:50.797: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 16 21:24:50.798: INFO: Found 0 / 1
Aug 16 21:24:51.988: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 16 21:24:51.988: INFO: Found 0 / 1
Aug 16 21:24:52.885: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 16 21:24:52.885: INFO: Found 0 / 1
Aug 16 21:24:54.000: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 16 21:24:54.000: INFO: Found 0 / 1
Aug 16 21:24:54.861: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 16 21:24:54.861: INFO: Found 1 / 1
Aug 16 21:24:54.861: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Aug 16 21:24:54.872: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 16 21:24:54.873: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Aug 16 21:24:54.873: INFO: wait on agnhost-master startup in kubectl-9760 
Aug 16 21:24:54.873: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs agnhost-master-vdn9m agnhost-master --namespace=kubectl-9760'
Aug 16 21:24:56.587: INFO: stderr: ""
Aug 16 21:24:56.587: INFO: stdout: "Paused\n"
STEP: exposing RC
Aug 16 21:24:56.588: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-9760'
Aug 16 21:24:58.179: INFO: stderr: ""
Aug 16 21:24:58.179: INFO: stdout: "service/rm2 exposed\n"
Aug 16 21:24:58.304: INFO: Service rm2 in namespace kubectl-9760 found.
STEP: exposing service
Aug 16 21:25:00.316: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-9760'
Aug 16 21:25:01.599: INFO: stderr: ""
Aug 16 21:25:01.599: INFO: stdout: "service/rm3 exposed\n"
Aug 16 21:25:01.614: INFO: Service rm3 in namespace kubectl-9760 found.
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:25:03.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9760" for this suite.

• [SLOW TEST:15.520 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl expose
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1189
    should create services for rc  [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc  [Conformance]","total":278,"completed":211,"skipped":3567,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:25:03.643: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1768.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1768.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 16 21:25:11.937: INFO: DNS probes using dns-1768/dns-test-1fc2d71b-f2f3-4c34-ab5a-36d7e5be4aa6 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:25:11.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-1768" for this suite.

• [SLOW TEST:8.403 seconds]
[sig-network] DNS
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster  [Conformance]","total":278,"completed":212,"skipped":3567,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:25:12.059: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name cm-test-opt-del-9d5d6413-669a-4be0-b5f2-efb749b0bab6
STEP: Creating configMap with name cm-test-opt-upd-5aca3372-ccb0-4530-b3fc-1c5e1421b7f7
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-9d5d6413-669a-4be0-b5f2-efb749b0bab6
STEP: Updating configmap cm-test-opt-upd-5aca3372-ccb0-4530-b3fc-1c5e1421b7f7
STEP: Creating configMap with name cm-test-opt-create-11a7df9c-5392-4480-b942-916c9bbb7c08
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:25:28.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2700" for this suite.

• [SLOW TEST:16.102 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":213,"skipped":3673,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:25:28.163: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5490.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-5490.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5490.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-5490.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5490.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-5490.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5490.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-5490.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5490.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-5490.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5490.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-5490.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5490.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 24.188.97.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.97.188.24_udp@PTR;check="$$(dig +tcp +noall +answer +search 24.188.97.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.97.188.24_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5490.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-5490.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5490.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-5490.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5490.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-5490.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5490.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-5490.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5490.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-5490.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5490.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-5490.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5490.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 24.188.97.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.97.188.24_udp@PTR;check="$$(dig +tcp +noall +answer +search 24.188.97.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.97.188.24_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 16 21:25:38.660: INFO: Unable to read wheezy_udp@dns-test-service.dns-5490.svc.cluster.local from pod dns-5490/dns-test-bf5cda26-d2e7-4520-84ea-32764260f74f: the server could not find the requested resource (get pods dns-test-bf5cda26-d2e7-4520-84ea-32764260f74f)
Aug 16 21:25:38.663: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5490.svc.cluster.local from pod dns-5490/dns-test-bf5cda26-d2e7-4520-84ea-32764260f74f: the server could not find the requested resource (get pods dns-test-bf5cda26-d2e7-4520-84ea-32764260f74f)
Aug 16 21:25:38.667: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5490.svc.cluster.local from pod dns-5490/dns-test-bf5cda26-d2e7-4520-84ea-32764260f74f: the server could not find the requested resource (get pods dns-test-bf5cda26-d2e7-4520-84ea-32764260f74f)
Aug 16 21:25:38.670: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5490.svc.cluster.local from pod dns-5490/dns-test-bf5cda26-d2e7-4520-84ea-32764260f74f: the server could not find the requested resource (get pods dns-test-bf5cda26-d2e7-4520-84ea-32764260f74f)
Aug 16 21:25:38.693: INFO: Unable to read jessie_udp@dns-test-service.dns-5490.svc.cluster.local from pod dns-5490/dns-test-bf5cda26-d2e7-4520-84ea-32764260f74f: the server could not find the requested resource (get pods dns-test-bf5cda26-d2e7-4520-84ea-32764260f74f)
Aug 16 21:25:38.772: INFO: Unable to read jessie_tcp@dns-test-service.dns-5490.svc.cluster.local from pod dns-5490/dns-test-bf5cda26-d2e7-4520-84ea-32764260f74f: the server could not find the requested resource (get pods dns-test-bf5cda26-d2e7-4520-84ea-32764260f74f)
Aug 16 21:25:38.776: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5490.svc.cluster.local from pod dns-5490/dns-test-bf5cda26-d2e7-4520-84ea-32764260f74f: the server could not find the requested resource (get pods dns-test-bf5cda26-d2e7-4520-84ea-32764260f74f)
Aug 16 21:25:38.779: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5490.svc.cluster.local from pod dns-5490/dns-test-bf5cda26-d2e7-4520-84ea-32764260f74f: the server could not find the requested resource (get pods dns-test-bf5cda26-d2e7-4520-84ea-32764260f74f)
Aug 16 21:25:38.797: INFO: Lookups using dns-5490/dns-test-bf5cda26-d2e7-4520-84ea-32764260f74f failed for: [wheezy_udp@dns-test-service.dns-5490.svc.cluster.local wheezy_tcp@dns-test-service.dns-5490.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5490.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5490.svc.cluster.local jessie_udp@dns-test-service.dns-5490.svc.cluster.local jessie_tcp@dns-test-service.dns-5490.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5490.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5490.svc.cluster.local]

Aug 16 21:25:44.754: INFO: Unable to read wheezy_udp@dns-test-service.dns-5490.svc.cluster.local from pod dns-5490/dns-test-bf5cda26-d2e7-4520-84ea-32764260f74f: the server could not find the requested resource (get pods dns-test-bf5cda26-d2e7-4520-84ea-32764260f74f)
Aug 16 21:25:45.119: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5490.svc.cluster.local from pod dns-5490/dns-test-bf5cda26-d2e7-4520-84ea-32764260f74f: the server could not find the requested resource (get pods dns-test-bf5cda26-d2e7-4520-84ea-32764260f74f)
Aug 16 21:25:45.210: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5490.svc.cluster.local from pod dns-5490/dns-test-bf5cda26-d2e7-4520-84ea-32764260f74f: the server could not find the requested resource (get pods dns-test-bf5cda26-d2e7-4520-84ea-32764260f74f)
Aug 16 21:25:45.340: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5490.svc.cluster.local from pod dns-5490/dns-test-bf5cda26-d2e7-4520-84ea-32764260f74f: the server could not find the requested resource (get pods dns-test-bf5cda26-d2e7-4520-84ea-32764260f74f)
Aug 16 21:25:45.421: INFO: Unable to read jessie_udp@dns-test-service.dns-5490.svc.cluster.local from pod dns-5490/dns-test-bf5cda26-d2e7-4520-84ea-32764260f74f: the server could not find the requested resource (get pods dns-test-bf5cda26-d2e7-4520-84ea-32764260f74f)
Aug 16 21:25:45.426: INFO: Unable to read jessie_tcp@dns-test-service.dns-5490.svc.cluster.local from pod dns-5490/dns-test-bf5cda26-d2e7-4520-84ea-32764260f74f: the server could not find the requested resource (get pods dns-test-bf5cda26-d2e7-4520-84ea-32764260f74f)
Aug 16 21:25:45.431: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5490.svc.cluster.local from pod dns-5490/dns-test-bf5cda26-d2e7-4520-84ea-32764260f74f: the server could not find the requested resource (get pods dns-test-bf5cda26-d2e7-4520-84ea-32764260f74f)
Aug 16 21:25:45.436: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5490.svc.cluster.local from pod dns-5490/dns-test-bf5cda26-d2e7-4520-84ea-32764260f74f: the server could not find the requested resource (get pods dns-test-bf5cda26-d2e7-4520-84ea-32764260f74f)
Aug 16 21:25:45.786: INFO: Lookups using dns-5490/dns-test-bf5cda26-d2e7-4520-84ea-32764260f74f failed for: [wheezy_udp@dns-test-service.dns-5490.svc.cluster.local wheezy_tcp@dns-test-service.dns-5490.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5490.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5490.svc.cluster.local jessie_udp@dns-test-service.dns-5490.svc.cluster.local jessie_tcp@dns-test-service.dns-5490.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5490.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5490.svc.cluster.local]

Aug 16 21:25:48.826: INFO: Unable to read wheezy_udp@dns-test-service.dns-5490.svc.cluster.local from pod dns-5490/dns-test-bf5cda26-d2e7-4520-84ea-32764260f74f: the server could not find the requested resource (get pods dns-test-bf5cda26-d2e7-4520-84ea-32764260f74f)
Aug 16 21:25:48.832: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5490.svc.cluster.local from pod dns-5490/dns-test-bf5cda26-d2e7-4520-84ea-32764260f74f: the server could not find the requested resource (get pods dns-test-bf5cda26-d2e7-4520-84ea-32764260f74f)
Aug 16 21:25:48.843: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5490.svc.cluster.local from pod dns-5490/dns-test-bf5cda26-d2e7-4520-84ea-32764260f74f: the server could not find the requested resource (get pods dns-test-bf5cda26-d2e7-4520-84ea-32764260f74f)
Aug 16 21:25:48.849: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5490.svc.cluster.local from pod dns-5490/dns-test-bf5cda26-d2e7-4520-84ea-32764260f74f: the server could not find the requested resource (get pods dns-test-bf5cda26-d2e7-4520-84ea-32764260f74f)
Aug 16 21:25:48.875: INFO: Unable to read jessie_udp@dns-test-service.dns-5490.svc.cluster.local from pod dns-5490/dns-test-bf5cda26-d2e7-4520-84ea-32764260f74f: the server could not find the requested resource (get pods dns-test-bf5cda26-d2e7-4520-84ea-32764260f74f)
Aug 16 21:25:48.879: INFO: Unable to read jessie_tcp@dns-test-service.dns-5490.svc.cluster.local from pod dns-5490/dns-test-bf5cda26-d2e7-4520-84ea-32764260f74f: the server could not find the requested resource (get pods dns-test-bf5cda26-d2e7-4520-84ea-32764260f74f)
Aug 16 21:25:48.885: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5490.svc.cluster.local from pod dns-5490/dns-test-bf5cda26-d2e7-4520-84ea-32764260f74f: the server could not find the requested resource (get pods dns-test-bf5cda26-d2e7-4520-84ea-32764260f74f)
Aug 16 21:25:48.889: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5490.svc.cluster.local from pod dns-5490/dns-test-bf5cda26-d2e7-4520-84ea-32764260f74f: the server could not find the requested resource (get pods dns-test-bf5cda26-d2e7-4520-84ea-32764260f74f)
Aug 16 21:25:48.909: INFO: Lookups using dns-5490/dns-test-bf5cda26-d2e7-4520-84ea-32764260f74f failed for: [wheezy_udp@dns-test-service.dns-5490.svc.cluster.local wheezy_tcp@dns-test-service.dns-5490.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5490.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5490.svc.cluster.local jessie_udp@dns-test-service.dns-5490.svc.cluster.local jessie_tcp@dns-test-service.dns-5490.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5490.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5490.svc.cluster.local]

Aug 16 21:25:54.047: INFO: Unable to read wheezy_udp@dns-test-service.dns-5490.svc.cluster.local from pod dns-5490/dns-test-bf5cda26-d2e7-4520-84ea-32764260f74f: the server could not find the requested resource (get pods dns-test-bf5cda26-d2e7-4520-84ea-32764260f74f)
Aug 16 21:25:54.442: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5490.svc.cluster.local from pod dns-5490/dns-test-bf5cda26-d2e7-4520-84ea-32764260f74f: the server could not find the requested resource (get pods dns-test-bf5cda26-d2e7-4520-84ea-32764260f74f)
Aug 16 21:25:54.479: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5490.svc.cluster.local from pod dns-5490/dns-test-bf5cda26-d2e7-4520-84ea-32764260f74f: the server could not find the requested resource (get pods dns-test-bf5cda26-d2e7-4520-84ea-32764260f74f)
Aug 16 21:25:54.483: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5490.svc.cluster.local from pod dns-5490/dns-test-bf5cda26-d2e7-4520-84ea-32764260f74f: the server could not find the requested resource (get pods dns-test-bf5cda26-d2e7-4520-84ea-32764260f74f)
Aug 16 21:25:54.510: INFO: Unable to read jessie_udp@dns-test-service.dns-5490.svc.cluster.local from pod dns-5490/dns-test-bf5cda26-d2e7-4520-84ea-32764260f74f: the server could not find the requested resource (get pods dns-test-bf5cda26-d2e7-4520-84ea-32764260f74f)
Aug 16 21:25:54.515: INFO: Unable to read jessie_tcp@dns-test-service.dns-5490.svc.cluster.local from pod dns-5490/dns-test-bf5cda26-d2e7-4520-84ea-32764260f74f: the server could not find the requested resource (get pods dns-test-bf5cda26-d2e7-4520-84ea-32764260f74f)
Aug 16 21:25:54.519: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5490.svc.cluster.local from pod dns-5490/dns-test-bf5cda26-d2e7-4520-84ea-32764260f74f: the server could not find the requested resource (get pods dns-test-bf5cda26-d2e7-4520-84ea-32764260f74f)
Aug 16 21:25:54.522: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5490.svc.cluster.local from pod dns-5490/dns-test-bf5cda26-d2e7-4520-84ea-32764260f74f: the server could not find the requested resource (get pods dns-test-bf5cda26-d2e7-4520-84ea-32764260f74f)
Aug 16 21:25:54.954: INFO: Lookups using dns-5490/dns-test-bf5cda26-d2e7-4520-84ea-32764260f74f failed for: [wheezy_udp@dns-test-service.dns-5490.svc.cluster.local wheezy_tcp@dns-test-service.dns-5490.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5490.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5490.svc.cluster.local jessie_udp@dns-test-service.dns-5490.svc.cluster.local jessie_tcp@dns-test-service.dns-5490.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5490.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5490.svc.cluster.local]

Aug 16 21:25:59.013: INFO: Unable to read wheezy_udp@dns-test-service.dns-5490.svc.cluster.local from pod dns-5490/dns-test-bf5cda26-d2e7-4520-84ea-32764260f74f: the server could not find the requested resource (get pods dns-test-bf5cda26-d2e7-4520-84ea-32764260f74f)
Aug 16 21:25:59.079: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5490.svc.cluster.local from pod dns-5490/dns-test-bf5cda26-d2e7-4520-84ea-32764260f74f: the server could not find the requested resource (get pods dns-test-bf5cda26-d2e7-4520-84ea-32764260f74f)
Aug 16 21:25:59.084: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5490.svc.cluster.local from pod dns-5490/dns-test-bf5cda26-d2e7-4520-84ea-32764260f74f: the server could not find the requested resource (get pods dns-test-bf5cda26-d2e7-4520-84ea-32764260f74f)
Aug 16 21:25:59.089: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5490.svc.cluster.local from pod dns-5490/dns-test-bf5cda26-d2e7-4520-84ea-32764260f74f: the server could not find the requested resource (get pods dns-test-bf5cda26-d2e7-4520-84ea-32764260f74f)
Aug 16 21:25:59.412: INFO: Unable to read jessie_udp@dns-test-service.dns-5490.svc.cluster.local from pod dns-5490/dns-test-bf5cda26-d2e7-4520-84ea-32764260f74f: the server could not find the requested resource (get pods dns-test-bf5cda26-d2e7-4520-84ea-32764260f74f)
Aug 16 21:25:59.419: INFO: Unable to read jessie_tcp@dns-test-service.dns-5490.svc.cluster.local from pod dns-5490/dns-test-bf5cda26-d2e7-4520-84ea-32764260f74f: the server could not find the requested resource (get pods dns-test-bf5cda26-d2e7-4520-84ea-32764260f74f)
Aug 16 21:25:59.422: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5490.svc.cluster.local from pod dns-5490/dns-test-bf5cda26-d2e7-4520-84ea-32764260f74f: the server could not find the requested resource (get pods dns-test-bf5cda26-d2e7-4520-84ea-32764260f74f)
Aug 16 21:25:59.425: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5490.svc.cluster.local from pod dns-5490/dns-test-bf5cda26-d2e7-4520-84ea-32764260f74f: the server could not find the requested resource (get pods dns-test-bf5cda26-d2e7-4520-84ea-32764260f74f)
Aug 16 21:25:59.441: INFO: Lookups using dns-5490/dns-test-bf5cda26-d2e7-4520-84ea-32764260f74f failed for: [wheezy_udp@dns-test-service.dns-5490.svc.cluster.local wheezy_tcp@dns-test-service.dns-5490.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5490.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5490.svc.cluster.local jessie_udp@dns-test-service.dns-5490.svc.cluster.local jessie_tcp@dns-test-service.dns-5490.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5490.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5490.svc.cluster.local]

Aug 16 21:26:04.402: INFO: Unable to read wheezy_udp@dns-test-service.dns-5490.svc.cluster.local from pod dns-5490/dns-test-bf5cda26-d2e7-4520-84ea-32764260f74f: the server could not find the requested resource (get pods dns-test-bf5cda26-d2e7-4520-84ea-32764260f74f)
Aug 16 21:26:04.885: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5490.svc.cluster.local from pod dns-5490/dns-test-bf5cda26-d2e7-4520-84ea-32764260f74f: the server could not find the requested resource (get pods dns-test-bf5cda26-d2e7-4520-84ea-32764260f74f)
Aug 16 21:26:05.166: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5490.svc.cluster.local from pod dns-5490/dns-test-bf5cda26-d2e7-4520-84ea-32764260f74f: the server could not find the requested resource (get pods dns-test-bf5cda26-d2e7-4520-84ea-32764260f74f)
Aug 16 21:26:05.170: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5490.svc.cluster.local from pod dns-5490/dns-test-bf5cda26-d2e7-4520-84ea-32764260f74f: the server could not find the requested resource (get pods dns-test-bf5cda26-d2e7-4520-84ea-32764260f74f)
Aug 16 21:26:05.188: INFO: Unable to read jessie_udp@dns-test-service.dns-5490.svc.cluster.local from pod dns-5490/dns-test-bf5cda26-d2e7-4520-84ea-32764260f74f: the server could not find the requested resource (get pods dns-test-bf5cda26-d2e7-4520-84ea-32764260f74f)
Aug 16 21:26:05.191: INFO: Unable to read jessie_tcp@dns-test-service.dns-5490.svc.cluster.local from pod dns-5490/dns-test-bf5cda26-d2e7-4520-84ea-32764260f74f: the server could not find the requested resource (get pods dns-test-bf5cda26-d2e7-4520-84ea-32764260f74f)
Aug 16 21:26:05.194: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5490.svc.cluster.local from pod dns-5490/dns-test-bf5cda26-d2e7-4520-84ea-32764260f74f: the server could not find the requested resource (get pods dns-test-bf5cda26-d2e7-4520-84ea-32764260f74f)
Aug 16 21:26:05.197: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5490.svc.cluster.local from pod dns-5490/dns-test-bf5cda26-d2e7-4520-84ea-32764260f74f: the server could not find the requested resource (get pods dns-test-bf5cda26-d2e7-4520-84ea-32764260f74f)
Aug 16 21:26:05.220: INFO: Lookups using dns-5490/dns-test-bf5cda26-d2e7-4520-84ea-32764260f74f failed for: [wheezy_udp@dns-test-service.dns-5490.svc.cluster.local wheezy_tcp@dns-test-service.dns-5490.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5490.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5490.svc.cluster.local jessie_udp@dns-test-service.dns-5490.svc.cluster.local jessie_tcp@dns-test-service.dns-5490.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5490.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5490.svc.cluster.local]

Aug 16 21:26:08.875: INFO: DNS probes using dns-5490/dns-test-bf5cda26-d2e7-4520-84ea-32764260f74f succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:26:09.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5490" for this suite.

• [SLOW TEST:41.642 seconds]
[sig-network] DNS
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for services  [Conformance]","total":278,"completed":214,"skipped":3684,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
S
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected combined
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:26:09.806: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-projected-all-test-volume-7be66f1e-fa81-44dc-bc7d-7464e65e3707
STEP: Creating secret with name secret-projected-all-test-volume-e351fdf1-97a1-4f7e-aa75-5570d601e239
STEP: Creating a pod to test Check all projections for projected volume plugin
Aug 16 21:26:10.079: INFO: Waiting up to 5m0s for pod "projected-volume-d6a9d9ef-320f-440b-898c-52623a8167e4" in namespace "projected-9519" to be "success or failure"
Aug 16 21:26:10.094: INFO: Pod "projected-volume-d6a9d9ef-320f-440b-898c-52623a8167e4": Phase="Pending", Reason="", readiness=false. Elapsed: 15.413272ms
Aug 16 21:26:12.102: INFO: Pod "projected-volume-d6a9d9ef-320f-440b-898c-52623a8167e4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023009604s
Aug 16 21:26:14.109: INFO: Pod "projected-volume-d6a9d9ef-320f-440b-898c-52623a8167e4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029551998s
Aug 16 21:26:16.116: INFO: Pod "projected-volume-d6a9d9ef-320f-440b-898c-52623a8167e4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.037101132s
STEP: Saw pod success
Aug 16 21:26:16.116: INFO: Pod "projected-volume-d6a9d9ef-320f-440b-898c-52623a8167e4" satisfied condition "success or failure"
Aug 16 21:26:16.122: INFO: Trying to get logs from node jerma-worker2 pod projected-volume-d6a9d9ef-320f-440b-898c-52623a8167e4 container projected-all-volume-test: 
STEP: delete the pod
Aug 16 21:26:16.180: INFO: Waiting for pod projected-volume-d6a9d9ef-320f-440b-898c-52623a8167e4 to disappear
Aug 16 21:26:16.388: INFO: Pod projected-volume-d6a9d9ef-320f-440b-898c-52623a8167e4 no longer exists
[AfterEach] [sig-storage] Projected combined
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:26:16.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9519" for this suite.

• [SLOW TEST:6.600 seconds]
[sig-storage] Projected combined
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":278,"completed":215,"skipped":3685,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny attaching pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:26:16.408: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 16 21:26:21.271: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 16 21:26:23.284: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733209981, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733209981, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733209981, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733209981, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 16 21:26:26.417: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny attaching pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod
STEP: 'kubectl attach' the pod, should be denied by the webhook
Aug 16 21:26:32.483: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config attach --namespace=webhook-4973 to-be-attached-pod -i -c=container1'
Aug 16 21:26:33.822: INFO: rc: 1
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:26:33.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4973" for this suite.
STEP: Destroying namespace "webhook-4973-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:17.541 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny attaching pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":278,"completed":216,"skipped":3689,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:26:33.950: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 16 21:26:34.194: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6efd1b35-3531-4283-9a80-23c04d2a435f" in namespace "downward-api-8271" to be "success or failure"
Aug 16 21:26:34.438: INFO: Pod "downwardapi-volume-6efd1b35-3531-4283-9a80-23c04d2a435f": Phase="Pending", Reason="", readiness=false. Elapsed: 243.902204ms
Aug 16 21:26:36.446: INFO: Pod "downwardapi-volume-6efd1b35-3531-4283-9a80-23c04d2a435f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.251693797s
Aug 16 21:26:38.452: INFO: Pod "downwardapi-volume-6efd1b35-3531-4283-9a80-23c04d2a435f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.258350987s
Aug 16 21:26:40.497: INFO: Pod "downwardapi-volume-6efd1b35-3531-4283-9a80-23c04d2a435f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.302483433s
STEP: Saw pod success
Aug 16 21:26:40.497: INFO: Pod "downwardapi-volume-6efd1b35-3531-4283-9a80-23c04d2a435f" satisfied condition "success or failure"
Aug 16 21:26:40.527: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-6efd1b35-3531-4283-9a80-23c04d2a435f container client-container: 
STEP: delete the pod
Aug 16 21:26:40.571: INFO: Waiting for pod downwardapi-volume-6efd1b35-3531-4283-9a80-23c04d2a435f to disappear
Aug 16 21:26:40.594: INFO: Pod downwardapi-volume-6efd1b35-3531-4283-9a80-23c04d2a435f no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:26:40.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8271" for this suite.

• [SLOW TEST:7.131 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":217,"skipped":3691,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a validating webhook should work [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:26:41.083: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 16 21:26:44.728: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 16 21:26:47.880: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733210004, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733210004, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733210004, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733210004, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 16 21:26:49.888: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733210004, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733210004, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733210004, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733210004, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 16 21:26:53.027: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a validating webhook should work [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a validating webhook configuration
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Updating a validating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Patching a validating webhook configuration's rules to include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:26:53.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3310" for this suite.
STEP: Destroying namespace "webhook-3310-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:12.556 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a validating webhook should work [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":278,"completed":218,"skipped":3701,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:26:53.642: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-6308
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Aug 16 21:26:53.908: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Aug 16 21:27:29.467: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.29:8080/dial?request=hostname&protocol=http&host=10.244.2.12&port=8080&tries=1'] Namespace:pod-network-test-6308 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 16 21:27:29.467: INFO: >>> kubeConfig: /root/.kube/config
I0816 21:27:29.528226       7 log.go:172] (0x4002be2840) (0x4001faafa0) Create stream
I0816 21:27:29.528453       7 log.go:172] (0x4002be2840) (0x4001faafa0) Stream added, broadcasting: 1
I0816 21:27:29.532555       7 log.go:172] (0x4002be2840) Reply frame received for 1
I0816 21:27:29.532710       7 log.go:172] (0x4002be2840) (0x4001fab040) Create stream
I0816 21:27:29.532867       7 log.go:172] (0x4002be2840) (0x4001fab040) Stream added, broadcasting: 3
I0816 21:27:29.534186       7 log.go:172] (0x4002be2840) Reply frame received for 3
I0816 21:27:29.534321       7 log.go:172] (0x4002be2840) (0x4001f141e0) Create stream
I0816 21:27:29.534399       7 log.go:172] (0x4002be2840) (0x4001f141e0) Stream added, broadcasting: 5
I0816 21:27:29.535859       7 log.go:172] (0x4002be2840) Reply frame received for 5
I0816 21:27:29.649088       7 log.go:172] (0x4002be2840) Data frame received for 3
I0816 21:27:29.649295       7 log.go:172] (0x4001fab040) (3) Data frame handling
I0816 21:27:29.649460       7 log.go:172] (0x4001fab040) (3) Data frame sent
I0816 21:27:29.649992       7 log.go:172] (0x4002be2840) Data frame received for 3
I0816 21:27:29.650198       7 log.go:172] (0x4001fab040) (3) Data frame handling
I0816 21:27:29.650374       7 log.go:172] (0x4002be2840) Data frame received for 5
I0816 21:27:29.650551       7 log.go:172] (0x4001f141e0) (5) Data frame handling
I0816 21:27:29.652017       7 log.go:172] (0x4002be2840) Data frame received for 1
I0816 21:27:29.652180       7 log.go:172] (0x4001faafa0) (1) Data frame handling
I0816 21:27:29.652311       7 log.go:172] (0x4001faafa0) (1) Data frame sent
I0816 21:27:29.652432       7 log.go:172] (0x4002be2840) (0x4001faafa0) Stream removed, broadcasting: 1
I0816 21:27:29.652599       7 log.go:172] (0x4002be2840) Go away received
I0816 21:27:29.653115       7 log.go:172] (0x4002be2840) (0x4001faafa0) Stream removed, broadcasting: 1
I0816 21:27:29.653258       7 log.go:172] (0x4002be2840) (0x4001fab040) Stream removed, broadcasting: 3
I0816 21:27:29.653386       7 log.go:172] (0x4002be2840) (0x4001f141e0) Stream removed, broadcasting: 5
Aug 16 21:27:29.653: INFO: Waiting for responses: map[]
Aug 16 21:27:29.680: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.29:8080/dial?request=hostname&protocol=http&host=10.244.1.28&port=8080&tries=1'] Namespace:pod-network-test-6308 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 16 21:27:29.681: INFO: >>> kubeConfig: /root/.kube/config
I0816 21:27:29.741161       7 log.go:172] (0x4002be3d90) (0x4001fab360) Create stream
I0816 21:27:29.741325       7 log.go:172] (0x4002be3d90) (0x4001fab360) Stream added, broadcasting: 1
I0816 21:27:29.745555       7 log.go:172] (0x4002be3d90) Reply frame received for 1
I0816 21:27:29.745760       7 log.go:172] (0x4002be3d90) (0x40009740a0) Create stream
I0816 21:27:29.745875       7 log.go:172] (0x4002be3d90) (0x40009740a0) Stream added, broadcasting: 3
I0816 21:27:29.747337       7 log.go:172] (0x4002be3d90) Reply frame received for 3
I0816 21:27:29.747472       7 log.go:172] (0x4002be3d90) (0x4001fab400) Create stream
I0816 21:27:29.747592       7 log.go:172] (0x4002be3d90) (0x4001fab400) Stream added, broadcasting: 5
I0816 21:27:29.748968       7 log.go:172] (0x4002be3d90) Reply frame received for 5
I0816 21:27:29.821554       7 log.go:172] (0x4002be3d90) Data frame received for 3
I0816 21:27:29.821672       7 log.go:172] (0x40009740a0) (3) Data frame handling
I0816 21:27:29.821791       7 log.go:172] (0x40009740a0) (3) Data frame sent
I0816 21:27:29.821905       7 log.go:172] (0x4002be3d90) Data frame received for 3
I0816 21:27:29.822021       7 log.go:172] (0x40009740a0) (3) Data frame handling
I0816 21:27:29.822129       7 log.go:172] (0x4002be3d90) Data frame received for 5
I0816 21:27:29.822225       7 log.go:172] (0x4001fab400) (5) Data frame handling
I0816 21:27:29.823580       7 log.go:172] (0x4002be3d90) Data frame received for 1
I0816 21:27:29.823679       7 log.go:172] (0x4001fab360) (1) Data frame handling
I0816 21:27:29.823761       7 log.go:172] (0x4001fab360) (1) Data frame sent
I0816 21:27:29.823841       7 log.go:172] (0x4002be3d90) (0x4001fab360) Stream removed, broadcasting: 1
I0816 21:27:29.823962       7 log.go:172] (0x4002be3d90) Go away received
I0816 21:27:29.824222       7 log.go:172] (0x4002be3d90) (0x4001fab360) Stream removed, broadcasting: 1
I0816 21:27:29.824339       7 log.go:172] (0x4002be3d90) (0x40009740a0) Stream removed, broadcasting: 3
I0816 21:27:29.824418       7 log.go:172] (0x4002be3d90) (0x4001fab400) Stream removed, broadcasting: 5
Aug 16 21:27:29.824: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:27:29.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-6308" for this suite.

• [SLOW TEST:36.197 seconds]
[sig-network] Networking
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":219,"skipped":3717,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:27:29.841: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod busybox-4b0eccf1-a222-415d-8fef-ae164b9cb49c in namespace container-probe-546
Aug 16 21:27:33.970: INFO: Started pod busybox-4b0eccf1-a222-415d-8fef-ae164b9cb49c in namespace container-probe-546
STEP: checking the pod's current state and verifying that restartCount is present
Aug 16 21:27:33.974: INFO: Initial restart count of pod busybox-4b0eccf1-a222-415d-8fef-ae164b9cb49c is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:31:34.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-546" for this suite.

• [SLOW TEST:245.219 seconds]
[k8s.io] Probing container
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":220,"skipped":3728,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group but different versions [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:31:35.063: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group but different versions [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation
Aug 16 21:31:35.138: INFO: >>> kubeConfig: /root/.kube/config
STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation
Aug 16 21:32:50.962: INFO: >>> kubeConfig: /root/.kube/config
Aug 16 21:33:09.976: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:34:17.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-2324" for this suite.

• [SLOW TEST:162.912 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group but different versions [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":278,"completed":221,"skipped":3798,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:34:17.979: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a pod. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Pod that fits quota
STEP: Ensuring ResourceQuota status captures the pod usage
STEP: Not allowing a pod to be created that exceeds remaining quota
STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources)
STEP: Ensuring a pod cannot update its resource requirements
STEP: Ensuring attempts to update pod resource requirements did not change quota usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:34:31.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2562" for this suite.

• [SLOW TEST:14.014 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":278,"completed":222,"skipped":3825,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:34:31.994: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-map-52fb0d80-d841-4532-bc35-bf28a25994f2
STEP: Creating a pod to test consume secrets
Aug 16 21:34:32.087: INFO: Waiting up to 5m0s for pod "pod-secrets-1205c6c5-11b4-47b3-96e2-0b85f90aac4a" in namespace "secrets-1335" to be "success or failure"
Aug 16 21:34:32.106: INFO: Pod "pod-secrets-1205c6c5-11b4-47b3-96e2-0b85f90aac4a": Phase="Pending", Reason="", readiness=false. Elapsed: 18.932343ms
Aug 16 21:34:34.758: INFO: Pod "pod-secrets-1205c6c5-11b4-47b3-96e2-0b85f90aac4a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.670179126s
Aug 16 21:34:36.849: INFO: Pod "pod-secrets-1205c6c5-11b4-47b3-96e2-0b85f90aac4a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.761252718s
Aug 16 21:34:38.857: INFO: Pod "pod-secrets-1205c6c5-11b4-47b3-96e2-0b85f90aac4a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.769143827s
STEP: Saw pod success
Aug 16 21:34:38.857: INFO: Pod "pod-secrets-1205c6c5-11b4-47b3-96e2-0b85f90aac4a" satisfied condition "success or failure"
Aug 16 21:34:38.862: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-1205c6c5-11b4-47b3-96e2-0b85f90aac4a container secret-volume-test: 
STEP: delete the pod
Aug 16 21:34:38.891: INFO: Waiting for pod pod-secrets-1205c6c5-11b4-47b3-96e2-0b85f90aac4a to disappear
Aug 16 21:34:38.895: INFO: Pod pod-secrets-1205c6c5-11b4-47b3-96e2-0b85f90aac4a no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:34:38.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1335" for this suite.

• [SLOW TEST:6.914 seconds]
[sig-storage] Secrets
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":223,"skipped":3837,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny custom resource creation, update and deletion [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:34:38.909: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 16 21:34:42.818: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 16 21:34:45.070: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733210482, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733210482, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733210482, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733210482, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 16 21:34:47.339: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733210482, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733210482, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733210482, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733210482, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 16 21:34:49.304: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733210482, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733210482, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733210482, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733210482, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 16 21:34:52.837: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny custom resource creation, update and deletion [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 16 21:34:52.843: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the custom resource webhook via the AdmissionRegistration API
STEP: Creating a custom resource that should be denied by the webhook
STEP: Creating a custom resource whose deletion would be denied by the webhook
STEP: Updating the custom resource with disallowed data should be denied
STEP: Deleting the custom resource should be denied
STEP: Remove the offending key and value from the custom resource data
STEP: Deleting the updated custom resource should be successful
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:34:54.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-409" for this suite.
STEP: Destroying namespace "webhook-409-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:16.216 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny custom resource creation, update and deletion [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":278,"completed":224,"skipped":3839,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:34:55.127: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-cd338459-cc88-4348-938e-8ba199746a5b
STEP: Creating a pod to test consume configMaps
Aug 16 21:34:57.006: INFO: Waiting up to 5m0s for pod "pod-configmaps-3adc569c-4ad4-4599-b8b0-46ac33b20a2d" in namespace "configmap-3478" to be "success or failure"
Aug 16 21:34:57.209: INFO: Pod "pod-configmaps-3adc569c-4ad4-4599-b8b0-46ac33b20a2d": Phase="Pending", Reason="", readiness=false. Elapsed: 202.923813ms
Aug 16 21:34:59.226: INFO: Pod "pod-configmaps-3adc569c-4ad4-4599-b8b0-46ac33b20a2d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.220000929s
Aug 16 21:35:01.232: INFO: Pod "pod-configmaps-3adc569c-4ad4-4599-b8b0-46ac33b20a2d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.225571669s
Aug 16 21:35:03.238: INFO: Pod "pod-configmaps-3adc569c-4ad4-4599-b8b0-46ac33b20a2d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.232293843s
STEP: Saw pod success
Aug 16 21:35:03.238: INFO: Pod "pod-configmaps-3adc569c-4ad4-4599-b8b0-46ac33b20a2d" satisfied condition "success or failure"
Aug 16 21:35:03.242: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-3adc569c-4ad4-4599-b8b0-46ac33b20a2d container configmap-volume-test: 
STEP: delete the pod
Aug 16 21:35:03.443: INFO: Waiting for pod pod-configmaps-3adc569c-4ad4-4599-b8b0-46ac33b20a2d to disappear
Aug 16 21:35:03.476: INFO: Pod pod-configmaps-3adc569c-4ad4-4599-b8b0-46ac33b20a2d no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:35:03.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3478" for this suite.

• [SLOW TEST:8.363 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":225,"skipped":3862,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:35:03.491: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-map-1f6f77cf-4b90-473f-8f51-1f74b6fe42bc
STEP: Creating a pod to test consume configMaps
Aug 16 21:35:03.957: INFO: Waiting up to 5m0s for pod "pod-configmaps-a04b51c9-0b64-4b76-b3b4-632eac359d24" in namespace "configmap-141" to be "success or failure"
Aug 16 21:35:03.996: INFO: Pod "pod-configmaps-a04b51c9-0b64-4b76-b3b4-632eac359d24": Phase="Pending", Reason="", readiness=false. Elapsed: 39.072425ms
Aug 16 21:35:06.046: INFO: Pod "pod-configmaps-a04b51c9-0b64-4b76-b3b4-632eac359d24": Phase="Pending", Reason="", readiness=false. Elapsed: 2.088503263s
Aug 16 21:35:08.053: INFO: Pod "pod-configmaps-a04b51c9-0b64-4b76-b3b4-632eac359d24": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.096156081s
STEP: Saw pod success
Aug 16 21:35:08.053: INFO: Pod "pod-configmaps-a04b51c9-0b64-4b76-b3b4-632eac359d24" satisfied condition "success or failure"
Aug 16 21:35:08.058: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-a04b51c9-0b64-4b76-b3b4-632eac359d24 container configmap-volume-test: 
STEP: delete the pod
Aug 16 21:35:08.116: INFO: Waiting for pod pod-configmaps-a04b51c9-0b64-4b76-b3b4-632eac359d24 to disappear
Aug 16 21:35:08.218: INFO: Pod pod-configmaps-a04b51c9-0b64-4b76-b3b4-632eac359d24 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:35:08.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-141" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":226,"skipped":3878,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:35:08.238: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-9842
[It] Should recreate evicted statefulset [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-9842
STEP: Creating statefulset with conflicting port in namespace statefulset-9842
STEP: Waiting until pod test-pod will start running in namespace statefulset-9842
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-9842
Aug 16 21:35:12.431: INFO: Observed stateful pod in namespace: statefulset-9842, name: ss-0, uid: 99edd88c-a100-4f31-8057-3457735deb2e, status phase: Pending. Waiting for statefulset controller to delete.
Aug 16 21:35:12.944: INFO: Observed stateful pod in namespace: statefulset-9842, name: ss-0, uid: 99edd88c-a100-4f31-8057-3457735deb2e, status phase: Failed. Waiting for statefulset controller to delete.
Aug 16 21:35:13.167: INFO: Observed stateful pod in namespace: statefulset-9842, name: ss-0, uid: 99edd88c-a100-4f31-8057-3457735deb2e, status phase: Failed. Waiting for statefulset controller to delete.
Aug 16 21:35:13.383: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-9842
STEP: Removing pod with conflicting port in namespace statefulset-9842
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-9842 and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Aug 16 21:35:19.645: INFO: Deleting all statefulset in ns statefulset-9842
Aug 16 21:35:19.649: INFO: Scaling statefulset ss to 0
Aug 16 21:35:39.698: INFO: Waiting for statefulset status.replicas updated to 0
Aug 16 21:35:39.703: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:35:39.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-9842" for this suite.

• [SLOW TEST:31.535 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    Should recreate evicted statefulset [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":278,"completed":227,"skipped":3882,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
S
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:35:39.774: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 16 21:35:39.831: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:35:44.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3814" for this suite.
•{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":278,"completed":228,"skipped":3883,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}

------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:35:44.061: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap that has name configmap-test-emptyKey-cb26b5b9-e7bc-48a9-a235-f96ec9c74dd7
[AfterEach] [sig-node] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:35:44.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9747" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":278,"completed":229,"skipped":3883,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:35:44.137: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should run and stop simple daemon [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Aug 16 21:35:44.278: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 16 21:35:44.313: INFO: Number of nodes with available pods: 0
Aug 16 21:35:44.313: INFO: Node jerma-worker is running more than one daemon pod
Aug 16 21:35:45.326: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 16 21:35:45.334: INFO: Number of nodes with available pods: 0
Aug 16 21:35:45.334: INFO: Node jerma-worker is running more than one daemon pod
Aug 16 21:35:46.537: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 16 21:35:46.543: INFO: Number of nodes with available pods: 0
Aug 16 21:35:46.543: INFO: Node jerma-worker is running more than one daemon pod
Aug 16 21:35:47.566: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 16 21:35:47.619: INFO: Number of nodes with available pods: 0
Aug 16 21:35:47.619: INFO: Node jerma-worker is running more than one daemon pod
Aug 16 21:35:48.323: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 16 21:35:48.330: INFO: Number of nodes with available pods: 0
Aug 16 21:35:48.330: INFO: Node jerma-worker is running more than one daemon pod
Aug 16 21:35:49.331: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 16 21:35:49.337: INFO: Number of nodes with available pods: 2
Aug 16 21:35:49.337: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Aug 16 21:35:49.399: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 16 21:35:49.410: INFO: Number of nodes with available pods: 1
Aug 16 21:35:49.410: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 16 21:35:50.424: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 16 21:35:50.431: INFO: Number of nodes with available pods: 1
Aug 16 21:35:50.431: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 16 21:35:51.421: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 16 21:35:51.427: INFO: Number of nodes with available pods: 1
Aug 16 21:35:51.427: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 16 21:35:52.420: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 16 21:35:52.426: INFO: Number of nodes with available pods: 1
Aug 16 21:35:52.426: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 16 21:35:53.423: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 16 21:35:53.428: INFO: Number of nodes with available pods: 1
Aug 16 21:35:53.428: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 16 21:35:54.419: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 16 21:35:54.424: INFO: Number of nodes with available pods: 1
Aug 16 21:35:54.424: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 16 21:35:55.645: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 16 21:35:55.706: INFO: Number of nodes with available pods: 1
Aug 16 21:35:55.706: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 16 21:35:56.611: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 16 21:35:56.617: INFO: Number of nodes with available pods: 1
Aug 16 21:35:56.617: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 16 21:35:57.422: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 16 21:35:57.429: INFO: Number of nodes with available pods: 1
Aug 16 21:35:57.429: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 16 21:35:58.420: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 16 21:35:58.426: INFO: Number of nodes with available pods: 2
Aug 16 21:35:58.426: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8508, will wait for the garbage collector to delete the pods
Aug 16 21:35:58.494: INFO: Deleting DaemonSet.extensions daemon-set took: 8.246674ms
Aug 16 21:35:58.794: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.672311ms
Aug 16 21:36:11.701: INFO: Number of nodes with available pods: 0
Aug 16 21:36:11.701: INFO: Number of running nodes: 0, number of available pods: 0
Aug 16 21:36:11.706: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8508/daemonsets","resourceVersion":"510628"},"items":null}

Aug 16 21:36:11.710: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8508/pods","resourceVersion":"510628"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:36:11.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-8508" for this suite.

• [SLOW TEST:27.602 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":278,"completed":230,"skipped":3895,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] version v1
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:36:11.750: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 16 21:36:11.857: INFO: (0) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: 
alternatives.log
containers/

(the kubelet /proxy/logs/ listing above is returned unchanged for each of the remaining requests in this test; the rest of its output, along with the opening lines of the following [k8s.io] Security Context spec, is missing from the capture, which resumes mid-line below)
>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 16 21:36:12.117: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-4c34dbad-743b-4285-aa29-eef1e3b3bd5b" in namespace "security-context-test-6992" to be "success or failure"
Aug 16 21:36:12.127: INFO: Pod "alpine-nnp-false-4c34dbad-743b-4285-aa29-eef1e3b3bd5b": Phase="Pending", Reason="", readiness=false. Elapsed: 9.464418ms
Aug 16 21:36:14.231: INFO: Pod "alpine-nnp-false-4c34dbad-743b-4285-aa29-eef1e3b3bd5b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.114152651s
Aug 16 21:36:16.481: INFO: Pod "alpine-nnp-false-4c34dbad-743b-4285-aa29-eef1e3b3bd5b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.363752425s
Aug 16 21:36:18.487: INFO: Pod "alpine-nnp-false-4c34dbad-743b-4285-aa29-eef1e3b3bd5b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.369901459s
Aug 16 21:36:18.487: INFO: Pod "alpine-nnp-false-4c34dbad-743b-4285-aa29-eef1e3b3bd5b" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:36:18.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-6992" for this suite.

• [SLOW TEST:6.539 seconds]
[k8s.io] Security Context
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when creating containers with AllowPrivilegeEscalation
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:289
    should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":232,"skipped":4011,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a container with runAsUser 
  should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:36:18.516: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 16 21:36:18.607: INFO: Waiting up to 5m0s for pod "busybox-user-65534-e31e8427-e307-4c24-a84e-ecb111df7f99" in namespace "security-context-test-1886" to be "success or failure"
Aug 16 21:36:18.626: INFO: Pod "busybox-user-65534-e31e8427-e307-4c24-a84e-ecb111df7f99": Phase="Pending", Reason="", readiness=false. Elapsed: 19.233803ms
Aug 16 21:36:20.651: INFO: Pod "busybox-user-65534-e31e8427-e307-4c24-a84e-ecb111df7f99": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043768415s
Aug 16 21:36:22.658: INFO: Pod "busybox-user-65534-e31e8427-e307-4c24-a84e-ecb111df7f99": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050922385s
Aug 16 21:36:24.665: INFO: Pod "busybox-user-65534-e31e8427-e307-4c24-a84e-ecb111df7f99": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.058241089s
Aug 16 21:36:24.666: INFO: Pod "busybox-user-65534-e31e8427-e307-4c24-a84e-ecb111df7f99" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:36:24.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-1886" for this suite.

• [SLOW TEST:6.163 seconds]
[k8s.io] Security Context
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  When creating a container with runAsUser
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:43
    should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":233,"skipped":4020,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:36:24.681: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 16 21:36:25.230: INFO: Creating ReplicaSet my-hostname-basic-e7b8d520-6950-43c5-b5b0-e8edb8c088b9
Aug 16 21:36:25.548: INFO: Pod name my-hostname-basic-e7b8d520-6950-43c5-b5b0-e8edb8c088b9: Found 0 pods out of 1
Aug 16 21:36:30.555: INFO: Pod name my-hostname-basic-e7b8d520-6950-43c5-b5b0-e8edb8c088b9: Found 1 pods out of 1
Aug 16 21:36:30.555: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-e7b8d520-6950-43c5-b5b0-e8edb8c088b9" is running
Aug 16 21:36:30.561: INFO: Pod "my-hostname-basic-e7b8d520-6950-43c5-b5b0-e8edb8c088b9-jx9wj" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-16 21:36:25 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-16 21:36:28 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-16 21:36:28 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-16 21:36:25 +0000 UTC Reason: Message:}])
Aug 16 21:36:30.561: INFO: Trying to dial the pod
Aug 16 21:36:35.574: INFO: Controller my-hostname-basic-e7b8d520-6950-43c5-b5b0-e8edb8c088b9: Got expected result from replica 1 [my-hostname-basic-e7b8d520-6950-43c5-b5b0-e8edb8c088b9-jx9wj]: "my-hostname-basic-e7b8d520-6950-43c5-b5b0-e8edb8c088b9-jx9wj", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:36:35.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-9975" for this suite.

• [SLOW TEST:10.905 seconds]
[sig-apps] ReplicaSet
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","total":278,"completed":234,"skipped":4022,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with privileged 
  should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:36:35.587: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 16 21:36:35.771: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-265a63fa-0533-45fb-ba72-38679953f538" in namespace "security-context-test-7232" to be "success or failure"
Aug 16 21:36:35.786: INFO: Pod "busybox-privileged-false-265a63fa-0533-45fb-ba72-38679953f538": Phase="Pending", Reason="", readiness=false. Elapsed: 14.257306ms
Aug 16 21:36:37.885: INFO: Pod "busybox-privileged-false-265a63fa-0533-45fb-ba72-38679953f538": Phase="Pending", Reason="", readiness=false. Elapsed: 2.113172665s
Aug 16 21:36:39.892: INFO: Pod "busybox-privileged-false-265a63fa-0533-45fb-ba72-38679953f538": Phase="Running", Reason="", readiness=true. Elapsed: 4.120050936s
Aug 16 21:36:41.898: INFO: Pod "busybox-privileged-false-265a63fa-0533-45fb-ba72-38679953f538": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.126903228s
Aug 16 21:36:41.899: INFO: Pod "busybox-privileged-false-265a63fa-0533-45fb-ba72-38679953f538" satisfied condition "success or failure"
Aug 16 21:36:41.907: INFO: Got logs for pod "busybox-privileged-false-265a63fa-0533-45fb-ba72-38679953f538": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [k8s.io] Security Context
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:36:41.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-7232" for this suite.

• [SLOW TEST:6.334 seconds]
[k8s.io] Security Context
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  When creating a pod with privileged
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:225
    should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":235,"skipped":4037,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:36:41.923: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 16 21:36:41.999: INFO: Waiting up to 5m0s for pod "downwardapi-volume-63ee6e3c-1beb-4789-a838-b14cdaa2ce8f" in namespace "downward-api-3023" to be "success or failure"
Aug 16 21:36:42.047: INFO: Pod "downwardapi-volume-63ee6e3c-1beb-4789-a838-b14cdaa2ce8f": Phase="Pending", Reason="", readiness=false. Elapsed: 47.86264ms
Aug 16 21:36:44.197: INFO: Pod "downwardapi-volume-63ee6e3c-1beb-4789-a838-b14cdaa2ce8f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.198464124s
Aug 16 21:36:46.204: INFO: Pod "downwardapi-volume-63ee6e3c-1beb-4789-a838-b14cdaa2ce8f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.204626753s
STEP: Saw pod success
Aug 16 21:36:46.204: INFO: Pod "downwardapi-volume-63ee6e3c-1beb-4789-a838-b14cdaa2ce8f" satisfied condition "success or failure"
Aug 16 21:36:46.208: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-63ee6e3c-1beb-4789-a838-b14cdaa2ce8f container client-container: 
STEP: delete the pod
Aug 16 21:36:46.229: INFO: Waiting for pod downwardapi-volume-63ee6e3c-1beb-4789-a838-b14cdaa2ce8f to disappear
Aug 16 21:36:46.232: INFO: Pod downwardapi-volume-63ee6e3c-1beb-4789-a838-b14cdaa2ce8f no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:36:46.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3023" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":236,"skipped":4055,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:36:46.247: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Aug 16 21:36:50.588: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:36:50.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-1703" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":237,"skipped":4094,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:36:50.781: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 16 21:36:50.855: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:36:51.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-4820" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","total":278,"completed":238,"skipped":4100,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:36:51.959: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name s-test-opt-del-ab40e545-1d12-41c6-9d52-c7b3fc8dabf4
STEP: Creating secret with name s-test-opt-upd-b7f28c04-3265-4db2-a188-256ff915b765
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-ab40e545-1d12-41c6-9d52-c7b3fc8dabf4
STEP: Updating secret s-test-opt-upd-b7f28c04-3265-4db2-a188-256ff915b765
STEP: Creating secret with name s-test-opt-create-46ad8b33-68ed-45ac-b77a-9f292eeff66a
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:37:06.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6924" for this suite.

• [SLOW TEST:14.894 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":239,"skipped":4103,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:37:06.854: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on node default medium
Aug 16 21:37:07.039: INFO: Waiting up to 5m0s for pod "pod-890b9eda-5e0f-46bb-a5a0-7aae7ac1e47b" in namespace "emptydir-696" to be "success or failure"
Aug 16 21:37:07.067: INFO: Pod "pod-890b9eda-5e0f-46bb-a5a0-7aae7ac1e47b": Phase="Pending", Reason="", readiness=false. Elapsed: 27.845908ms
Aug 16 21:37:09.435: INFO: Pod "pod-890b9eda-5e0f-46bb-a5a0-7aae7ac1e47b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.396478621s
Aug 16 21:37:11.443: INFO: Pod "pod-890b9eda-5e0f-46bb-a5a0-7aae7ac1e47b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.403658609s
STEP: Saw pod success
Aug 16 21:37:11.443: INFO: Pod "pod-890b9eda-5e0f-46bb-a5a0-7aae7ac1e47b" satisfied condition "success or failure"
Aug 16 21:37:11.448: INFO: Trying to get logs from node jerma-worker2 pod pod-890b9eda-5e0f-46bb-a5a0-7aae7ac1e47b container test-container: 
STEP: delete the pod
Aug 16 21:37:11.742: INFO: Waiting for pod pod-890b9eda-5e0f-46bb-a5a0-7aae7ac1e47b to disappear
Aug 16 21:37:11.772: INFO: Pod pod-890b9eda-5e0f-46bb-a5a0-7aae7ac1e47b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:37:11.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-696" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":240,"skipped":4104,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Service endpoints latency
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:37:11.784: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 16 21:37:11.958: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating replication controller svc-latency-rc in namespace svc-latency-8683
I0816 21:37:12.031494       7 runners.go:189] Created replication controller with name: svc-latency-rc, namespace: svc-latency-8683, replica count: 1
I0816 21:37:13.082764       7 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0816 21:37:14.083357       7 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0816 21:37:15.083827       7 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0816 21:37:16.084256       7 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0816 21:37:17.084847       7 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Aug 16 21:37:17.252: INFO: Created: latency-svc-c5krm
Aug 16 21:37:17.294: INFO: Got endpoints: latency-svc-c5krm [107.542454ms]
Aug 16 21:37:17.484: INFO: Created: latency-svc-njxvw
Aug 16 21:37:17.487: INFO: Got endpoints: latency-svc-njxvw [192.621045ms]
Aug 16 21:37:17.633: INFO: Created: latency-svc-nvzbq
Aug 16 21:37:17.636: INFO: Got endpoints: latency-svc-nvzbq [341.116947ms]
Aug 16 21:37:17.687: INFO: Created: latency-svc-d576p
Aug 16 21:37:17.714: INFO: Got endpoints: latency-svc-d576p [419.704208ms]
Aug 16 21:37:17.776: INFO: Created: latency-svc-m2j6s
Aug 16 21:37:17.781: INFO: Got endpoints: latency-svc-m2j6s [486.758216ms]
Aug 16 21:37:17.875: INFO: Created: latency-svc-pnmtj
Aug 16 21:37:17.916: INFO: Got endpoints: latency-svc-pnmtj [620.873687ms]
Aug 16 21:37:17.947: INFO: Created: latency-svc-jmvwq
Aug 16 21:37:17.958: INFO: Got endpoints: latency-svc-jmvwq [663.148079ms]
Aug 16 21:37:17.993: INFO: Created: latency-svc-kqglk
Aug 16 21:37:18.007: INFO: Got endpoints: latency-svc-kqglk [711.998323ms]
Aug 16 21:37:18.050: INFO: Created: latency-svc-llhwf
Aug 16 21:37:18.072: INFO: Got endpoints: latency-svc-llhwf [777.008287ms]
Aug 16 21:37:18.113: INFO: Created: latency-svc-rs5z6
Aug 16 21:37:18.202: INFO: Got endpoints: latency-svc-rs5z6 [906.572794ms]
Aug 16 21:37:18.253: INFO: Created: latency-svc-hv2hl
Aug 16 21:37:18.283: INFO: Got endpoints: latency-svc-hv2hl [987.993926ms]
Aug 16 21:37:18.363: INFO: Created: latency-svc-dsp5d
Aug 16 21:37:18.410: INFO: Created: latency-svc-6dzvp
Aug 16 21:37:18.410: INFO: Got endpoints: latency-svc-dsp5d [1.115881934s]
Aug 16 21:37:18.433: INFO: Got endpoints: latency-svc-6dzvp [1.138306948s]
Aug 16 21:37:18.543: INFO: Created: latency-svc-wxkpv
Aug 16 21:37:18.553: INFO: Got endpoints: latency-svc-wxkpv [1.258324935s]
Aug 16 21:37:18.587: INFO: Created: latency-svc-j87l5
Aug 16 21:37:18.611: INFO: Got endpoints: latency-svc-j87l5 [1.315217917s]
Aug 16 21:37:18.706: INFO: Created: latency-svc-xkjmk
Aug 16 21:37:18.759: INFO: Got endpoints: latency-svc-xkjmk [1.464541024s]
Aug 16 21:37:18.883: INFO: Created: latency-svc-q2rkf
Aug 16 21:37:18.883: INFO: Got endpoints: latency-svc-q2rkf [1.395973442s]
Aug 16 21:37:18.944: INFO: Created: latency-svc-pq7cv
Aug 16 21:37:18.966: INFO: Got endpoints: latency-svc-pq7cv [1.330552592s]
Aug 16 21:37:19.074: INFO: Created: latency-svc-mtz9h
Aug 16 21:37:19.092: INFO: Got endpoints: latency-svc-mtz9h [1.377225288s]
Aug 16 21:37:19.118: INFO: Created: latency-svc-b9m5s
Aug 16 21:37:19.134: INFO: Got endpoints: latency-svc-b9m5s [1.352622681s]
Aug 16 21:37:19.153: INFO: Created: latency-svc-6lrm5
Aug 16 21:37:19.226: INFO: Got endpoints: latency-svc-6lrm5 [1.310509166s]
Aug 16 21:37:19.249: INFO: Created: latency-svc-hxm2d
Aug 16 21:37:19.274: INFO: Got endpoints: latency-svc-hxm2d [1.316323816s]
Aug 16 21:37:19.274: INFO: Created: latency-svc-64t7l
Aug 16 21:37:19.290: INFO: Got endpoints: latency-svc-64t7l [1.283448975s]
Aug 16 21:37:19.390: INFO: Created: latency-svc-5bbv7
Aug 16 21:37:19.429: INFO: Got endpoints: latency-svc-5bbv7 [1.356820088s]
Aug 16 21:37:19.540: INFO: Created: latency-svc-79fzg
Aug 16 21:37:19.598: INFO: Created: latency-svc-nl7f4
Aug 16 21:37:19.599: INFO: Got endpoints: latency-svc-79fzg [1.397040579s]
Aug 16 21:37:19.621: INFO: Got endpoints: latency-svc-nl7f4 [1.338519924s]
Aug 16 21:37:19.705: INFO: Created: latency-svc-dnm7d
Aug 16 21:37:19.710: INFO: Got endpoints: latency-svc-dnm7d [1.29930285s]
Aug 16 21:37:19.790: INFO: Created: latency-svc-zhr4r
Aug 16 21:37:19.832: INFO: Got endpoints: latency-svc-zhr4r [1.39917378s]
Aug 16 21:37:19.856: INFO: Created: latency-svc-9qvhs
Aug 16 21:37:19.876: INFO: Got endpoints: latency-svc-9qvhs [1.322910314s]
Aug 16 21:37:19.932: INFO: Created: latency-svc-pbsbv
Aug 16 21:37:19.987: INFO: Got endpoints: latency-svc-pbsbv [1.375900453s]
Aug 16 21:37:20.000: INFO: Created: latency-svc-cnbtt
Aug 16 21:37:20.021: INFO: Got endpoints: latency-svc-cnbtt [1.26081797s]
Aug 16 21:37:20.149: INFO: Created: latency-svc-rdr7n
Aug 16 21:37:20.165: INFO: Got endpoints: latency-svc-rdr7n [1.28116938s]
Aug 16 21:37:20.310: INFO: Created: latency-svc-rqln2
Aug 16 21:37:20.310: INFO: Got endpoints: latency-svc-rqln2 [1.343542605s]
Aug 16 21:37:20.354: INFO: Created: latency-svc-j559k
Aug 16 21:37:20.385: INFO: Got endpoints: latency-svc-j559k [1.292815118s]
Aug 16 21:37:20.466: INFO: Created: latency-svc-d7tqx
Aug 16 21:37:20.521: INFO: Got endpoints: latency-svc-d7tqx [1.386536528s]
Aug 16 21:37:20.521: INFO: Created: latency-svc-6k299
Aug 16 21:37:20.543: INFO: Got endpoints: latency-svc-6k299 [1.316208969s]
Aug 16 21:37:20.627: INFO: Created: latency-svc-n7kp5
Aug 16 21:37:20.631: INFO: Got endpoints: latency-svc-n7kp5 [1.356792261s]
Aug 16 21:37:20.693: INFO: Created: latency-svc-bgmcj
Aug 16 21:37:20.713: INFO: Got endpoints: latency-svc-bgmcj [1.421923279s]
Aug 16 21:37:20.786: INFO: Created: latency-svc-5kl88
Aug 16 21:37:20.803: INFO: Got endpoints: latency-svc-5kl88 [1.373782034s]
Aug 16 21:37:20.837: INFO: Created: latency-svc-lpnpm
Aug 16 21:37:20.857: INFO: Got endpoints: latency-svc-lpnpm [1.257723331s]
Aug 16 21:37:20.910: INFO: Created: latency-svc-dlwfw
Aug 16 21:37:20.930: INFO: Got endpoints: latency-svc-dlwfw [1.308437202s]
Aug 16 21:37:20.975: INFO: Created: latency-svc-glrvh
Aug 16 21:37:20.990: INFO: Got endpoints: latency-svc-glrvh [1.279664169s]
Aug 16 21:37:21.043: INFO: Created: latency-svc-bms8d
Aug 16 21:37:21.059: INFO: Got endpoints: latency-svc-bms8d [1.226597604s]
Aug 16 21:37:21.107: INFO: Created: latency-svc-qq6zh
Aug 16 21:37:21.125: INFO: Got endpoints: latency-svc-qq6zh [1.248682676s]
Aug 16 21:37:21.207: INFO: Created: latency-svc-cjpzq
Aug 16 21:37:21.261: INFO: Created: latency-svc-5q6w7
Aug 16 21:37:21.263: INFO: Got endpoints: latency-svc-cjpzq [1.275815688s]
Aug 16 21:37:21.266: INFO: Got endpoints: latency-svc-5q6w7 [1.245139564s]
Aug 16 21:37:21.399: INFO: Created: latency-svc-t8h4w
Aug 16 21:37:21.403: INFO: Got endpoints: latency-svc-t8h4w [1.237978588s]
Aug 16 21:37:21.452: INFO: Created: latency-svc-xqwbb
Aug 16 21:37:21.462: INFO: Got endpoints: latency-svc-xqwbb [1.151716165s]
Aug 16 21:37:21.492: INFO: Created: latency-svc-zp8gd
Aug 16 21:37:21.629: INFO: Got endpoints: latency-svc-zp8gd [1.243922989s]
Aug 16 21:37:21.631: INFO: Created: latency-svc-bpn5j
Aug 16 21:37:21.636: INFO: Got endpoints: latency-svc-bpn5j [1.115178921s]
Aug 16 21:37:21.680: INFO: Created: latency-svc-q7gcm
Aug 16 21:37:21.697: INFO: Got endpoints: latency-svc-q7gcm [1.154049197s]
Aug 16 21:37:21.777: INFO: Created: latency-svc-z52dn
Aug 16 21:37:21.793: INFO: Got endpoints: latency-svc-z52dn [1.161569612s]
Aug 16 21:37:21.814: INFO: Created: latency-svc-5lff2
Aug 16 21:37:21.823: INFO: Got endpoints: latency-svc-5lff2 [1.110228138s]
Aug 16 21:37:21.846: INFO: Created: latency-svc-rldl8
Aug 16 21:37:21.859: INFO: Got endpoints: latency-svc-rldl8 [1.05593479s]
Aug 16 21:37:22.067: INFO: Created: latency-svc-tw7b2
Aug 16 21:37:22.070: INFO: Got endpoints: latency-svc-tw7b2 [1.21322974s]
Aug 16 21:37:22.349: INFO: Created: latency-svc-ml45q
Aug 16 21:37:22.363: INFO: Got endpoints: latency-svc-ml45q [1.432965572s]
Aug 16 21:37:22.424: INFO: Created: latency-svc-5mrqj
Aug 16 21:37:22.478: INFO: Got endpoints: latency-svc-5mrqj [1.488709785s]
Aug 16 21:37:22.513: INFO: Created: latency-svc-jbqpv
Aug 16 21:37:22.526: INFO: Got endpoints: latency-svc-jbqpv [1.466847496s]
Aug 16 21:37:22.547: INFO: Created: latency-svc-rnjt2
Aug 16 21:37:22.562: INFO: Got endpoints: latency-svc-rnjt2 [1.436566657s]
Aug 16 21:37:22.651: INFO: Created: latency-svc-hfhtm
Aug 16 21:37:22.655: INFO: Got endpoints: latency-svc-hfhtm [1.39200127s]
Aug 16 21:37:22.688: INFO: Created: latency-svc-2chwc
Aug 16 21:37:22.702: INFO: Got endpoints: latency-svc-2chwc [1.435748724s]
Aug 16 21:37:22.723: INFO: Created: latency-svc-trsmj
Aug 16 21:37:22.737: INFO: Got endpoints: latency-svc-trsmj [1.334457489s]
Aug 16 21:37:22.790: INFO: Created: latency-svc-wckdz
Aug 16 21:37:22.799: INFO: Got endpoints: latency-svc-wckdz [1.336480279s]
Aug 16 21:37:22.829: INFO: Created: latency-svc-tr6h4
Aug 16 21:37:22.860: INFO: Got endpoints: latency-svc-tr6h4 [1.230686449s]
Aug 16 21:37:22.939: INFO: Created: latency-svc-6gjlw
Aug 16 21:37:22.941: INFO: Got endpoints: latency-svc-6gjlw [1.305210355s]
Aug 16 21:37:22.981: INFO: Created: latency-svc-km48b
Aug 16 21:37:22.985: INFO: Got endpoints: latency-svc-km48b [1.287619177s]
Aug 16 21:37:23.012: INFO: Created: latency-svc-9bq9b
Aug 16 21:37:23.014: INFO: Got endpoints: latency-svc-9bq9b [1.22143842s]
Aug 16 21:37:23.076: INFO: Created: latency-svc-f8wr2
Aug 16 21:37:23.079: INFO: Got endpoints: latency-svc-f8wr2 [1.256139845s]
Aug 16 21:37:23.111: INFO: Created: latency-svc-f57bc
Aug 16 21:37:23.124: INFO: Got endpoints: latency-svc-f57bc [1.264233675s]
Aug 16 21:37:23.147: INFO: Created: latency-svc-7ttnj
Aug 16 21:37:23.160: INFO: Got endpoints: latency-svc-7ttnj [1.089636304s]
Aug 16 21:37:23.233: INFO: Created: latency-svc-gs67v
Aug 16 21:37:23.237: INFO: Got endpoints: latency-svc-gs67v [873.272813ms]
Aug 16 21:37:23.263: INFO: Created: latency-svc-52x6m
Aug 16 21:37:23.286: INFO: Got endpoints: latency-svc-52x6m [807.319889ms]
Aug 16 21:37:23.305: INFO: Created: latency-svc-m698p
Aug 16 21:37:23.322: INFO: Got endpoints: latency-svc-m698p [795.859947ms]
Aug 16 21:37:23.388: INFO: Created: latency-svc-jq49g
Aug 16 21:37:23.392: INFO: Got endpoints: latency-svc-jq49g [829.634956ms]
Aug 16 21:37:23.428: INFO: Created: latency-svc-xljr5
Aug 16 21:37:23.448: INFO: Got endpoints: latency-svc-xljr5 [792.703511ms]
Aug 16 21:37:23.464: INFO: Created: latency-svc-ntnpj
Aug 16 21:37:23.479: INFO: Got endpoints: latency-svc-ntnpj [777.013401ms]
Aug 16 21:37:23.531: INFO: Created: latency-svc-flmqf
Aug 16 21:37:23.539: INFO: Got endpoints: latency-svc-flmqf [801.554973ms]
Aug 16 21:37:23.603: INFO: Created: latency-svc-dm289
Aug 16 21:37:23.675: INFO: Got endpoints: latency-svc-dm289 [875.876889ms]
Aug 16 21:37:23.690: INFO: Created: latency-svc-2qp65
Aug 16 21:37:23.695: INFO: Got endpoints: latency-svc-2qp65 [835.434698ms]
Aug 16 21:37:23.720: INFO: Created: latency-svc-vvj27
Aug 16 21:37:23.733: INFO: Got endpoints: latency-svc-vvj27 [791.16965ms]
Aug 16 21:37:23.752: INFO: Created: latency-svc-4wksl
Aug 16 21:37:23.768: INFO: Got endpoints: latency-svc-4wksl [783.034259ms]
Aug 16 21:37:23.825: INFO: Created: latency-svc-qcsft
Aug 16 21:37:23.835: INFO: Got endpoints: latency-svc-qcsft [819.834883ms]
Aug 16 21:37:23.867: INFO: Created: latency-svc-r5rxw
Aug 16 21:37:23.883: INFO: Got endpoints: latency-svc-r5rxw [803.151909ms]
Aug 16 21:37:23.905: INFO: Created: latency-svc-2lnlh
Aug 16 21:37:23.920: INFO: Got endpoints: latency-svc-2lnlh [795.921533ms]
Aug 16 21:37:23.968: INFO: Created: latency-svc-xzf5p
Aug 16 21:37:23.972: INFO: Got endpoints: latency-svc-xzf5p [811.48404ms]
Aug 16 21:37:24.007: INFO: Created: latency-svc-5q7c2
Aug 16 21:37:24.022: INFO: Got endpoints: latency-svc-5q7c2 [784.637331ms]
Aug 16 21:37:24.040: INFO: Created: latency-svc-qjssq
Aug 16 21:37:24.052: INFO: Got endpoints: latency-svc-qjssq [766.26933ms]
Aug 16 21:37:24.124: INFO: Created: latency-svc-ct7nr
Aug 16 21:37:24.136: INFO: Got endpoints: latency-svc-ct7nr [813.729028ms]
Aug 16 21:37:24.187: INFO: Created: latency-svc-2m8qf
Aug 16 21:37:24.335: INFO: Got endpoints: latency-svc-2m8qf [943.471239ms]
Aug 16 21:37:24.339: INFO: Created: latency-svc-n96j2
Aug 16 21:37:24.621: INFO: Got endpoints: latency-svc-n96j2 [1.172987321s]
Aug 16 21:37:24.987: INFO: Created: latency-svc-8hrx5
Aug 16 21:37:24.994: INFO: Got endpoints: latency-svc-8hrx5 [1.514458944s]
Aug 16 21:37:25.229: INFO: Created: latency-svc-s8dxn
Aug 16 21:37:25.378: INFO: Got endpoints: latency-svc-s8dxn [1.8380836s]
Aug 16 21:37:25.378: INFO: Created: latency-svc-qwt8v
Aug 16 21:37:25.398: INFO: Got endpoints: latency-svc-qwt8v [1.722913127s]
Aug 16 21:37:25.465: INFO: Created: latency-svc-f47cl
Aug 16 21:37:25.574: INFO: Created: latency-svc-hqspp
Aug 16 21:37:25.574: INFO: Got endpoints: latency-svc-f47cl [1.87823542s]
Aug 16 21:37:25.600: INFO: Got endpoints: latency-svc-hqspp [1.866649977s]
Aug 16 21:37:25.651: INFO: Created: latency-svc-n9fd2
Aug 16 21:37:25.667: INFO: Got endpoints: latency-svc-n9fd2 [1.898602047s]
Aug 16 21:37:25.723: INFO: Created: latency-svc-7qdbn
Aug 16 21:37:25.733: INFO: Got endpoints: latency-svc-7qdbn [1.897904173s]
Aug 16 21:37:25.751: INFO: Created: latency-svc-gn8rz
Aug 16 21:37:25.763: INFO: Got endpoints: latency-svc-gn8rz [1.880339916s]
Aug 16 21:37:25.784: INFO: Created: latency-svc-zmszh
Aug 16 21:37:25.800: INFO: Got endpoints: latency-svc-zmszh [1.880130884s]
Aug 16 21:37:25.872: INFO: Created: latency-svc-2dlms
Aug 16 21:37:25.875: INFO: Got endpoints: latency-svc-2dlms [1.903062839s]
Aug 16 21:37:25.932: INFO: Created: latency-svc-xkjm8
Aug 16 21:37:25.950: INFO: Got endpoints: latency-svc-xkjm8 [1.927892047s]
Aug 16 21:37:25.968: INFO: Created: latency-svc-7bfq2
Aug 16 21:37:26.041: INFO: Created: latency-svc-tjjnj
Aug 16 21:37:26.041: INFO: Got endpoints: latency-svc-7bfq2 [1.988640307s]
Aug 16 21:37:26.059: INFO: Got endpoints: latency-svc-tjjnj [1.92242257s]
Aug 16 21:37:26.102: INFO: Created: latency-svc-vgxwt
Aug 16 21:37:26.125: INFO: Got endpoints: latency-svc-vgxwt [1.78906199s]
Aug 16 21:37:26.257: INFO: Created: latency-svc-h7cxz
Aug 16 21:37:26.261: INFO: Got endpoints: latency-svc-h7cxz [1.639416976s]
Aug 16 21:37:26.337: INFO: Created: latency-svc-j9d6h
Aug 16 21:37:26.355: INFO: Got endpoints: latency-svc-j9d6h [1.360860789s]
Aug 16 21:37:26.418: INFO: Created: latency-svc-g99fz
Aug 16 21:37:26.425: INFO: Got endpoints: latency-svc-g99fz [1.047591682s]
Aug 16 21:37:26.444: INFO: Created: latency-svc-tbwbs
Aug 16 21:37:26.455: INFO: Got endpoints: latency-svc-tbwbs [1.057389729s]
Aug 16 21:37:26.474: INFO: Created: latency-svc-7jn9w
Aug 16 21:37:26.492: INFO: Got endpoints: latency-svc-7jn9w [918.506105ms]
Aug 16 21:37:26.510: INFO: Created: latency-svc-s9s9m
Aug 16 21:37:26.580: INFO: Got endpoints: latency-svc-s9s9m [980.353802ms]
Aug 16 21:37:26.582: INFO: Created: latency-svc-km4rl
Aug 16 21:37:26.594: INFO: Got endpoints: latency-svc-km4rl [927.226159ms]
Aug 16 21:37:26.628: INFO: Created: latency-svc-crjxb
Aug 16 21:37:26.649: INFO: Got endpoints: latency-svc-crjxb [916.535905ms]
Aug 16 21:37:26.796: INFO: Created: latency-svc-bvgll
Aug 16 21:37:26.819: INFO: Got endpoints: latency-svc-bvgll [1.055791298s]
Aug 16 21:37:26.843: INFO: Created: latency-svc-gmdzn
Aug 16 21:37:26.895: INFO: Got endpoints: latency-svc-gmdzn [1.095136597s]
Aug 16 21:37:27.317: INFO: Created: latency-svc-gvfrb
Aug 16 21:37:27.754: INFO: Got endpoints: latency-svc-gvfrb [1.878655755s]
Aug 16 21:37:27.758: INFO: Created: latency-svc-cb2c6
Aug 16 21:37:27.849: INFO: Got endpoints: latency-svc-cb2c6 [1.899064139s]
Aug 16 21:37:28.132: INFO: Created: latency-svc-h8kh8
Aug 16 21:37:28.421: INFO: Got endpoints: latency-svc-h8kh8 [2.379493841s]
Aug 16 21:37:28.591: INFO: Created: latency-svc-p8x4v
Aug 16 21:37:28.609: INFO: Got endpoints: latency-svc-p8x4v [2.550222586s]
Aug 16 21:37:28.760: INFO: Created: latency-svc-9sqjl
Aug 16 21:37:28.761: INFO: Got endpoints: latency-svc-9sqjl [2.636111161s]
Aug 16 21:37:29.294: INFO: Created: latency-svc-sp2gk
Aug 16 21:37:29.299: INFO: Got endpoints: latency-svc-sp2gk [3.03785223s]
Aug 16 21:37:29.594: INFO: Created: latency-svc-b2lsx
Aug 16 21:37:29.795: INFO: Got endpoints: latency-svc-b2lsx [3.440017912s]
Aug 16 21:37:29.798: INFO: Created: latency-svc-p4lv4
Aug 16 21:37:29.805: INFO: Got endpoints: latency-svc-p4lv4 [3.379542371s]
Aug 16 21:37:29.849: INFO: Created: latency-svc-9d9wv
Aug 16 21:37:30.278: INFO: Got endpoints: latency-svc-9d9wv [3.82256777s]
Aug 16 21:37:30.736: INFO: Created: latency-svc-sp66s
Aug 16 21:37:30.742: INFO: Got endpoints: latency-svc-sp66s [4.249435219s]
Aug 16 21:37:31.353: INFO: Created: latency-svc-hdptr
Aug 16 21:37:31.364: INFO: Got endpoints: latency-svc-hdptr [4.782814234s]
Aug 16 21:37:31.713: INFO: Created: latency-svc-t8dvs
Aug 16 21:37:31.718: INFO: Got endpoints: latency-svc-t8dvs [5.123596129s]
Aug 16 21:37:32.039: INFO: Created: latency-svc-7pz7z
Aug 16 21:37:32.077: INFO: Got endpoints: latency-svc-7pz7z [5.426986501s]
Aug 16 21:37:32.213: INFO: Created: latency-svc-js6wl
Aug 16 21:37:32.216: INFO: Got endpoints: latency-svc-js6wl [5.396684043s]
Aug 16 21:37:32.282: INFO: Created: latency-svc-96znn
Aug 16 21:37:32.406: INFO: Got endpoints: latency-svc-96znn [5.510595567s]
Aug 16 21:37:32.406: INFO: Created: latency-svc-nc57t
Aug 16 21:37:32.442: INFO: Got endpoints: latency-svc-nc57t [4.687226971s]
Aug 16 21:37:33.072: INFO: Created: latency-svc-w7jhc
Aug 16 21:37:33.076: INFO: Got endpoints: latency-svc-w7jhc [5.226796049s]
Aug 16 21:37:33.365: INFO: Created: latency-svc-5h6bk
Aug 16 21:37:33.748: INFO: Got endpoints: latency-svc-5h6bk [5.326464029s]
Aug 16 21:37:33.805: INFO: Created: latency-svc-fhg5h
Aug 16 21:37:33.827: INFO: Got endpoints: latency-svc-fhg5h [5.218227563s]
Aug 16 21:37:34.216: INFO: Created: latency-svc-l7s9v
Aug 16 21:37:34.719: INFO: Created: latency-svc-99xqn
Aug 16 21:37:34.720: INFO: Got endpoints: latency-svc-l7s9v [5.958779249s]
Aug 16 21:37:34.993: INFO: Got endpoints: latency-svc-99xqn [5.693677218s]
Aug 16 21:37:34.997: INFO: Created: latency-svc-vw5kt
Aug 16 21:37:35.023: INFO: Got endpoints: latency-svc-vw5kt [5.227795258s]
Aug 16 21:37:35.172: INFO: Created: latency-svc-rnp86
Aug 16 21:37:35.403: INFO: Got endpoints: latency-svc-rnp86 [5.598029776s]
Aug 16 21:37:35.448: INFO: Created: latency-svc-7gcg5
Aug 16 21:37:35.665: INFO: Got endpoints: latency-svc-7gcg5 [5.386585331s]
Aug 16 21:37:35.849: INFO: Created: latency-svc-ng77k
Aug 16 21:37:35.859: INFO: Got endpoints: latency-svc-ng77k [5.116309338s]
Aug 16 21:37:35.912: INFO: Created: latency-svc-zxssp
Aug 16 21:37:35.936: INFO: Got endpoints: latency-svc-zxssp [4.57261571s]
Aug 16 21:37:36.005: INFO: Created: latency-svc-6mxxs
Aug 16 21:37:36.026: INFO: Got endpoints: latency-svc-6mxxs [4.308190311s]
Aug 16 21:37:36.051: INFO: Created: latency-svc-z2wp2
Aug 16 21:37:36.058: INFO: Got endpoints: latency-svc-z2wp2 [3.980729035s]
Aug 16 21:37:36.087: INFO: Created: latency-svc-vzkpc
Aug 16 21:37:36.184: INFO: Got endpoints: latency-svc-vzkpc [3.968129031s]
Aug 16 21:37:36.201: INFO: Created: latency-svc-sqsxr
Aug 16 21:37:36.216: INFO: Got endpoints: latency-svc-sqsxr [3.809384586s]
Aug 16 21:37:36.237: INFO: Created: latency-svc-94clx
Aug 16 21:37:36.249: INFO: Got endpoints: latency-svc-94clx [3.807429788s]
Aug 16 21:37:36.358: INFO: Created: latency-svc-d6k6m
Aug 16 21:37:36.360: INFO: Got endpoints: latency-svc-d6k6m [3.283794548s]
Aug 16 21:37:36.430: INFO: Created: latency-svc-k7t5j
Aug 16 21:37:36.454: INFO: Got endpoints: latency-svc-k7t5j [2.706302777s]
Aug 16 21:37:36.539: INFO: Created: latency-svc-786wf
Aug 16 21:37:36.540: INFO: Got endpoints: latency-svc-786wf [2.71191647s]
Aug 16 21:37:36.625: INFO: Created: latency-svc-972gg
Aug 16 21:37:36.729: INFO: Got endpoints: latency-svc-972gg [2.008613749s]
Aug 16 21:37:36.730: INFO: Created: latency-svc-qqslc
Aug 16 21:37:36.754: INFO: Got endpoints: latency-svc-qqslc [1.760578725s]
Aug 16 21:37:36.932: INFO: Created: latency-svc-wg7tr
Aug 16 21:37:36.976: INFO: Got endpoints: latency-svc-wg7tr [1.95238715s]
Aug 16 21:37:37.145: INFO: Created: latency-svc-5nx29
Aug 16 21:37:37.187: INFO: Created: latency-svc-75nxd
Aug 16 21:37:37.189: INFO: Got endpoints: latency-svc-5nx29 [1.784962655s]
Aug 16 21:37:37.221: INFO: Got endpoints: latency-svc-75nxd [1.555496343s]
Aug 16 21:37:37.300: INFO: Created: latency-svc-kn8w6
Aug 16 21:37:37.321: INFO: Got endpoints: latency-svc-kn8w6 [1.461651715s]
Aug 16 21:37:37.364: INFO: Created: latency-svc-kc2mm
Aug 16 21:37:37.449: INFO: Got endpoints: latency-svc-kc2mm [1.512457841s]
Aug 16 21:37:37.675: INFO: Created: latency-svc-lvhd5
Aug 16 21:37:37.691: INFO: Got endpoints: latency-svc-lvhd5 [1.664173179s]
Aug 16 21:37:38.467: INFO: Created: latency-svc-zhb6j
Aug 16 21:37:38.498: INFO: Got endpoints: latency-svc-zhb6j [2.440340888s]
Aug 16 21:37:38.547: INFO: Created: latency-svc-nsnq9
Aug 16 21:37:38.833: INFO: Got endpoints: latency-svc-nsnq9 [2.648638239s]
Aug 16 21:37:39.073: INFO: Created: latency-svc-dfbvb
Aug 16 21:37:39.075: INFO: Got endpoints: latency-svc-dfbvb [2.859396148s]
Aug 16 21:37:39.250: INFO: Created: latency-svc-6mhxj
Aug 16 21:37:39.253: INFO: Got endpoints: latency-svc-6mhxj [3.002818516s]
Aug 16 21:37:39.447: INFO: Created: latency-svc-q4476
Aug 16 21:37:39.452: INFO: Got endpoints: latency-svc-q4476 [3.091010821s]
Aug 16 21:37:39.767: INFO: Created: latency-svc-s9njp
Aug 16 21:37:39.833: INFO: Got endpoints: latency-svc-s9njp [3.378138006s]
Aug 16 21:37:40.144: INFO: Created: latency-svc-9rfns
Aug 16 21:37:40.148: INFO: Got endpoints: latency-svc-9rfns [3.608294787s]
Aug 16 21:37:40.292: INFO: Created: latency-svc-5f8wd
Aug 16 21:37:40.304: INFO: Got endpoints: latency-svc-5f8wd [3.575126727s]
Aug 16 21:37:40.321: INFO: Created: latency-svc-w5bxm
Aug 16 21:37:40.348: INFO: Got endpoints: latency-svc-w5bxm [3.594242175s]
Aug 16 21:37:40.460: INFO: Created: latency-svc-84b4q
Aug 16 21:37:40.491: INFO: Got endpoints: latency-svc-84b4q [3.514841512s]
Aug 16 21:37:40.526: INFO: Created: latency-svc-fb9qk
Aug 16 21:37:40.645: INFO: Got endpoints: latency-svc-fb9qk [3.456016779s]
Aug 16 21:37:40.656: INFO: Created: latency-svc-lz8qs
Aug 16 21:37:40.671: INFO: Got endpoints: latency-svc-lz8qs [3.450046317s]
Aug 16 21:37:40.965: INFO: Created: latency-svc-djrps
Aug 16 21:37:41.011: INFO: Got endpoints: latency-svc-djrps [3.689762798s]
Aug 16 21:37:41.040: INFO: Created: latency-svc-6qdvd
Aug 16 21:37:41.061: INFO: Got endpoints: latency-svc-6qdvd [3.611681858s]
Aug 16 21:37:41.178: INFO: Created: latency-svc-s4x66
Aug 16 21:37:41.211: INFO: Got endpoints: latency-svc-s4x66 [3.520166455s]
Aug 16 21:37:41.334: INFO: Created: latency-svc-rx4m7
Aug 16 21:37:41.338: INFO: Got endpoints: latency-svc-rx4m7 [2.839497624s]
Aug 16 21:37:41.413: INFO: Created: latency-svc-zmp5s
Aug 16 21:37:41.478: INFO: Got endpoints: latency-svc-zmp5s [2.644644989s]
Aug 16 21:37:41.540: INFO: Created: latency-svc-9zqz8
Aug 16 21:37:41.560: INFO: Got endpoints: latency-svc-9zqz8 [2.48426067s]
Aug 16 21:37:41.638: INFO: Created: latency-svc-wbkbf
Aug 16 21:37:41.668: INFO: Got endpoints: latency-svc-wbkbf [2.415253866s]
Aug 16 21:37:41.714: INFO: Created: latency-svc-kqxn4
Aug 16 21:37:41.801: INFO: Got endpoints: latency-svc-kqxn4 [2.348869353s]
Aug 16 21:37:41.807: INFO: Created: latency-svc-9qj82
Aug 16 21:37:41.886: INFO: Got endpoints: latency-svc-9qj82 [2.053354939s]
Aug 16 21:37:42.373: INFO: Created: latency-svc-r68r6
Aug 16 21:37:42.407: INFO: Got endpoints: latency-svc-r68r6 [2.258685924s]
Aug 16 21:37:42.574: INFO: Created: latency-svc-j7wqn
Aug 16 21:37:42.664: INFO: Got endpoints: latency-svc-j7wqn [2.35889794s]
Aug 16 21:37:42.750: INFO: Created: latency-svc-dz2l7
Aug 16 21:37:42.785: INFO: Got endpoints: latency-svc-dz2l7 [2.436346366s]
Aug 16 21:37:42.938: INFO: Created: latency-svc-n7ptj
Aug 16 21:37:42.994: INFO: Got endpoints: latency-svc-n7ptj [2.50264218s]
Aug 16 21:37:43.097: INFO: Created: latency-svc-sg2w9
Aug 16 21:37:43.144: INFO: Got endpoints: latency-svc-sg2w9 [2.498575142s]
Aug 16 21:37:43.256: INFO: Created: latency-svc-2cphg
Aug 16 21:37:43.324: INFO: Got endpoints: latency-svc-2cphg [2.652478154s]
Aug 16 21:37:43.324: INFO: Created: latency-svc-4hq5m
Aug 16 21:37:43.477: INFO: Got endpoints: latency-svc-4hq5m [2.466467469s]
Aug 16 21:37:43.479: INFO: Created: latency-svc-t558b
Aug 16 21:37:43.545: INFO: Got endpoints: latency-svc-t558b [2.483955415s]
Aug 16 21:37:44.079: INFO: Created: latency-svc-gts76
Aug 16 21:37:44.091: INFO: Got endpoints: latency-svc-gts76 [2.880215454s]
Aug 16 21:37:44.550: INFO: Created: latency-svc-jjwzt
Aug 16 21:37:44.554: INFO: Got endpoints: latency-svc-jjwzt [3.215994805s]
Aug 16 21:37:46.432: INFO: Created: latency-svc-jpzvw
Aug 16 21:37:46.453: INFO: Got endpoints: latency-svc-jpzvw [4.974450403s]
Aug 16 21:37:46.622: INFO: Created: latency-svc-cvlk8
Aug 16 21:37:46.667: INFO: Got endpoints: latency-svc-cvlk8 [5.107137896s]
Aug 16 21:37:46.817: INFO: Created: latency-svc-jprx2
Aug 16 21:37:46.859: INFO: Got endpoints: latency-svc-jprx2 [5.19124734s]
Aug 16 21:37:46.989: INFO: Created: latency-svc-d2l2j
Aug 16 21:37:46.997: INFO: Got endpoints: latency-svc-d2l2j [5.196354929s]
Aug 16 21:37:47.045: INFO: Created: latency-svc-nnbpk
Aug 16 21:37:47.058: INFO: Got endpoints: latency-svc-nnbpk [5.171283667s]
Aug 16 21:37:47.084: INFO: Created: latency-svc-864rt
Aug 16 21:37:47.137: INFO: Got endpoints: latency-svc-864rt [4.729996476s]
Aug 16 21:37:47.139: INFO: Created: latency-svc-9ll89
Aug 16 21:37:47.154: INFO: Got endpoints: latency-svc-9ll89 [4.490131286s]
Aug 16 21:37:47.203: INFO: Created: latency-svc-q578q
Aug 16 21:37:47.229: INFO: Got endpoints: latency-svc-q578q [4.443861162s]
Aug 16 21:37:47.304: INFO: Created: latency-svc-jn6d2
Aug 16 21:37:47.307: INFO: Got endpoints: latency-svc-jn6d2 [4.312510366s]
Aug 16 21:37:47.443: INFO: Created: latency-svc-6k9x5
Aug 16 21:37:47.485: INFO: Got endpoints: latency-svc-6k9x5 [4.341051636s]
Aug 16 21:37:47.524: INFO: Created: latency-svc-ftrl2
Aug 16 21:37:48.023: INFO: Got endpoints: latency-svc-ftrl2 [4.699580514s]
Aug 16 21:37:48.027: INFO: Created: latency-svc-4qp65
Aug 16 21:37:48.103: INFO: Got endpoints: latency-svc-4qp65 [4.625252606s]
Aug 16 21:37:48.179: INFO: Created: latency-svc-5mrjv
Aug 16 21:37:48.248: INFO: Got endpoints: latency-svc-5mrjv [4.701949888s]
Aug 16 21:37:48.388: INFO: Created: latency-svc-4fp7p
Aug 16 21:37:48.710: INFO: Got endpoints: latency-svc-4fp7p [4.618360705s]
Aug 16 21:37:48.711: INFO: Latencies: [192.621045ms 341.116947ms 419.704208ms 486.758216ms 620.873687ms 663.148079ms 711.998323ms 766.26933ms 777.008287ms 777.013401ms 783.034259ms 784.637331ms 791.16965ms 792.703511ms 795.859947ms 795.921533ms 801.554973ms 803.151909ms 807.319889ms 811.48404ms 813.729028ms 819.834883ms 829.634956ms 835.434698ms 873.272813ms 875.876889ms 906.572794ms 916.535905ms 918.506105ms 927.226159ms 943.471239ms 980.353802ms 987.993926ms 1.047591682s 1.055791298s 1.05593479s 1.057389729s 1.089636304s 1.095136597s 1.110228138s 1.115178921s 1.115881934s 1.138306948s 1.151716165s 1.154049197s 1.161569612s 1.172987321s 1.21322974s 1.22143842s 1.226597604s 1.230686449s 1.237978588s 1.243922989s 1.245139564s 1.248682676s 1.256139845s 1.257723331s 1.258324935s 1.26081797s 1.264233675s 1.275815688s 1.279664169s 1.28116938s 1.283448975s 1.287619177s 1.292815118s 1.29930285s 1.305210355s 1.308437202s 1.310509166s 1.315217917s 1.316208969s 1.316323816s 1.322910314s 1.330552592s 1.334457489s 1.336480279s 1.338519924s 1.343542605s 1.352622681s 1.356792261s 1.356820088s 1.360860789s 1.373782034s 1.375900453s 1.377225288s 1.386536528s 1.39200127s 1.395973442s 1.397040579s 1.39917378s 1.421923279s 1.432965572s 1.435748724s 1.436566657s 1.461651715s 1.464541024s 1.466847496s 1.488709785s 1.512457841s 1.514458944s 1.555496343s 1.639416976s 1.664173179s 1.722913127s 1.760578725s 1.784962655s 1.78906199s 1.8380836s 1.866649977s 1.87823542s 1.878655755s 1.880130884s 1.880339916s 1.897904173s 1.898602047s 1.899064139s 1.903062839s 1.92242257s 1.927892047s 1.95238715s 1.988640307s 2.008613749s 2.053354939s 2.258685924s 2.348869353s 2.35889794s 2.379493841s 2.415253866s 2.436346366s 2.440340888s 2.466467469s 2.483955415s 2.48426067s 2.498575142s 2.50264218s 2.550222586s 2.636111161s 2.644644989s 2.648638239s 2.652478154s 2.706302777s 2.71191647s 2.839497624s 2.859396148s 2.880215454s 3.002818516s 3.03785223s 3.091010821s 3.215994805s 3.283794548s 3.378138006s 3.379542371s 3.440017912s 3.450046317s 3.456016779s 3.514841512s 3.520166455s 3.575126727s 3.594242175s 3.608294787s 3.611681858s 3.689762798s 3.807429788s 3.809384586s 3.82256777s 3.968129031s 3.980729035s 4.249435219s 4.308190311s 4.312510366s 4.341051636s 4.443861162s 4.490131286s 4.57261571s 4.618360705s 4.625252606s 4.687226971s 4.699580514s 4.701949888s 4.729996476s 4.782814234s 4.974450403s 5.107137896s 5.116309338s 5.123596129s 5.171283667s 5.19124734s 5.196354929s 5.218227563s 5.226796049s 5.227795258s 5.326464029s 5.386585331s 5.396684043s 5.426986501s 5.510595567s 5.598029776s 5.693677218s 5.958779249s]
Aug 16 21:37:48.712: INFO: 50 %ile: 1.514458944s
Aug 16 21:37:48.712: INFO: 90 %ile: 4.729996476s
Aug 16 21:37:48.712: INFO: 99 %ile: 5.693677218s
Aug 16 21:37:48.712: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:37:48.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-8683" for this suite.

• [SLOW TEST:37.234 seconds]
[sig-network] Service endpoints latency
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high  [Conformance]","total":278,"completed":241,"skipped":4116,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:37:49.022: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Aug 16 21:38:01.959: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 16 21:38:02.049: INFO: Pod pod-with-poststart-http-hook still exists
Aug 16 21:38:04.050: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 16 21:38:04.084: INFO: Pod pod-with-poststart-http-hook still exists
Aug 16 21:38:06.050: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 16 21:38:06.060: INFO: Pod pod-with-poststart-http-hook still exists
Aug 16 21:38:08.050: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 16 21:38:09.789: INFO: Pod pod-with-poststart-http-hook still exists
Aug 16 21:38:10.050: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 16 21:38:10.174: INFO: Pod pod-with-poststart-http-hook still exists
Aug 16 21:38:12.050: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 16 21:38:12.075: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:38:12.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-1854" for this suite.

• [SLOW TEST:23.156 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":278,"completed":242,"skipped":4138,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:38:12.180: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 16 21:38:12.521: INFO: Waiting up to 5m0s for pod "downwardapi-volume-59ecf9c3-e447-49bb-b45a-10645ff1faa1" in namespace "projected-3510" to be "success or failure"
Aug 16 21:38:12.557: INFO: Pod "downwardapi-volume-59ecf9c3-e447-49bb-b45a-10645ff1faa1": Phase="Pending", Reason="", readiness=false. Elapsed: 35.262928ms
Aug 16 21:38:14.675: INFO: Pod "downwardapi-volume-59ecf9c3-e447-49bb-b45a-10645ff1faa1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.153509201s
Aug 16 21:38:16.712: INFO: Pod "downwardapi-volume-59ecf9c3-e447-49bb-b45a-10645ff1faa1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.190650454s
Aug 16 21:38:18.747: INFO: Pod "downwardapi-volume-59ecf9c3-e447-49bb-b45a-10645ff1faa1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.225304972s
STEP: Saw pod success
Aug 16 21:38:18.747: INFO: Pod "downwardapi-volume-59ecf9c3-e447-49bb-b45a-10645ff1faa1" satisfied condition "success or failure"
Aug 16 21:38:18.813: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-59ecf9c3-e447-49bb-b45a-10645ff1faa1 container client-container: 
STEP: delete the pod
Aug 16 21:38:19.226: INFO: Waiting for pod downwardapi-volume-59ecf9c3-e447-49bb-b45a-10645ff1faa1 to disappear
Aug 16 21:38:19.255: INFO: Pod downwardapi-volume-59ecf9c3-e447-49bb-b45a-10645ff1faa1 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:38:19.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3510" for this suite.

• [SLOW TEST:7.214 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":243,"skipped":4162,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:38:19.396: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on node default medium
Aug 16 21:38:19.834: INFO: Waiting up to 5m0s for pod "pod-9d0706e2-1038-4006-92db-f4b71362f4d2" in namespace "emptydir-2209" to be "success or failure"
Aug 16 21:38:19.857: INFO: Pod "pod-9d0706e2-1038-4006-92db-f4b71362f4d2": Phase="Pending", Reason="", readiness=false. Elapsed: 22.602701ms
Aug 16 21:38:22.969: INFO: Pod "pod-9d0706e2-1038-4006-92db-f4b71362f4d2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.134424418s
Aug 16 21:38:25.036: INFO: Pod "pod-9d0706e2-1038-4006-92db-f4b71362f4d2": Phase="Running", Reason="", readiness=true. Elapsed: 5.20150018s
Aug 16 21:38:27.275: INFO: Pod "pod-9d0706e2-1038-4006-92db-f4b71362f4d2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 7.440959272s
STEP: Saw pod success
Aug 16 21:38:27.276: INFO: Pod "pod-9d0706e2-1038-4006-92db-f4b71362f4d2" satisfied condition "success or failure"
Aug 16 21:38:27.285: INFO: Trying to get logs from node jerma-worker2 pod pod-9d0706e2-1038-4006-92db-f4b71362f4d2 container test-container: 
STEP: delete the pod
Aug 16 21:38:27.719: INFO: Waiting for pod pod-9d0706e2-1038-4006-92db-f4b71362f4d2 to disappear
Aug 16 21:38:27.734: INFO: Pod pod-9d0706e2-1038-4006-92db-f4b71362f4d2 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:38:27.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2209" for this suite.

• [SLOW TEST:8.554 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":244,"skipped":4169,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:38:27.951: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Aug 16 21:38:28.094: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 16 21:38:28.142: INFO: Waiting for terminating namespaces to be deleted...
Aug 16 21:38:28.145: INFO: 
Logging pods the kubelet thinks are on node jerma-worker before test
Aug 16 21:38:28.154: INFO: kube-proxy-lgd85 from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded)
Aug 16 21:38:28.154: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 16 21:38:28.154: INFO: pod-handle-http-request from container-lifecycle-hook-1854 started at 2020-08-16 21:37:49 +0000 UTC (1 container statuses recorded)
Aug 16 21:38:28.154: INFO: 	Container pod-handle-http-request ready: false, restart count 0
Aug 16 21:38:28.154: INFO: kindnet-tfrcx from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded)
Aug 16 21:38:28.154: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 16 21:38:28.154: INFO: 
Logging pods the kubelet thinks are on node jerma-worker2 before test
Aug 16 21:38:28.232: INFO: kube-proxy-ckhpn from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded)
Aug 16 21:38:28.232: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 16 21:38:28.232: INFO: kindnet-gxck9 from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded)
Aug 16 21:38:28.232: INFO: 	Container kindnet-cni ready: true, restart count 0
[It] validates that NodeSelector is respected if matching  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-f194eaa0-0141-48ac-9815-4694b7eac83d 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-f194eaa0-0141-48ac-9815-4694b7eac83d off the node jerma-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-f194eaa0-0141-48ac-9815-4694b7eac83d
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:38:37.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-2846" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77

• [SLOW TEST:9.404 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that NodeSelector is respected if matching  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching  [Conformance]","total":278,"completed":245,"skipped":4173,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  custom resource defaulting for requests and from storage works  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:38:37.356: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] custom resource defaulting for requests and from storage works  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 16 21:38:37.623: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:38:38.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-8963" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":278,"completed":246,"skipped":4189,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:38:38.903: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:38:45.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-1947" for this suite.

• [SLOW TEST:6.778 seconds]
[k8s.io] Kubelet
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when scheduling a busybox Pod with hostAliases
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":247,"skipped":4218,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
S
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:38:45.682: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 16 21:38:46.295: INFO: Creating deployment "test-recreate-deployment"
Aug 16 21:38:46.484: INFO: Waiting for deployment "test-recreate-deployment" to be updated to revision 1
Aug 16 21:38:46.581: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Aug 16 21:38:48.975: INFO: Waiting for deployment "test-recreate-deployment" to complete
Aug 16 21:38:49.017: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733210726, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733210726, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733210726, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733210726, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 16 21:38:51.069: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Aug 16 21:38:51.268: INFO: Updating deployment test-recreate-deployment
Aug 16 21:38:51.268: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Aug 16 21:38:52.300: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:{test-recreate-deployment  deployment-8288 /apis/apps/v1/namespaces/deployment-8288/deployments/test-recreate-deployment 39b31b50-ea84-4cbb-8183-f28877ddf4c7 512748 2 2020-08-16 21:38:46 +0000 UTC   map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x40045adaf8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-08-16 21:38:51 +0000 UTC,LastTransitionTime:2020-08-16 21:38:51 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-08-16 21:38:52 +0000 UTC,LastTransitionTime:2020-08-16 21:38:46 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},}

Aug 16 21:38:52.728: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff  deployment-8288 /apis/apps/v1/namespaces/deployment-8288/replicasets/test-recreate-deployment-5f94c574ff 79bfffd3-95b6-416f-815e-f2f838758675 512745 1 2020-08-16 21:38:51 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 39b31b50-ea84-4cbb-8183-f28877ddf4c7 0x40045ade77 0x40045ade78}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x40045aded8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Aug 16 21:38:52.728: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Aug 16 21:38:52.729: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-799c574856  deployment-8288 /apis/apps/v1/namespaces/deployment-8288/replicasets/test-recreate-deployment-799c574856 c48bfebf-da6f-408e-8e85-2f62a26bf633 512734 2 2020-08-16 21:38:46 +0000 UTC   map[name:sample-pod-3 pod-template-hash:799c574856] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 39b31b50-ea84-4cbb-8183-f28877ddf4c7 0x40045adf47 0x40045adf48}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 799c574856,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:799c574856] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x40045adfb8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Aug 16 21:38:52.774: INFO: Pod "test-recreate-deployment-5f94c574ff-thbbk" is not available:
&Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-thbbk test-recreate-deployment-5f94c574ff- deployment-8288 /api/v1/namespaces/deployment-8288/pods/test-recreate-deployment-5f94c574ff-thbbk 362fab22-1f4f-4a91-9dd8-62759a474b5b 512744 0 2020-08-16 21:38:51 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff 79bfffd3-95b6-416f-815e-f2f838758675 0x4003b29a47 0x4003b29a48}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vqppv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vqppv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vqppv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:38:51 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:38:52.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-8288" for this suite.

• [SLOW TEST:7.350 seconds]
[sig-apps] Deployment
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":248,"skipped":4219,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should scale a replication controller  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:38:53.033: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Update Demo
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:325
[It] should scale a replication controller  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a replication controller
Aug 16 21:38:53.297: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9146'
Aug 16 21:39:03.817: INFO: stderr: ""
Aug 16 21:39:03.817: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 16 21:39:03.818: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9146'
Aug 16 21:39:05.026: INFO: stderr: ""
Aug 16 21:39:05.026: INFO: stdout: "update-demo-nautilus-khq5m update-demo-nautilus-xzh8q "
Aug 16 21:39:05.027: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-khq5m -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9146'
Aug 16 21:39:06.270: INFO: stderr: ""
Aug 16 21:39:06.270: INFO: stdout: ""
Aug 16 21:39:06.270: INFO: update-demo-nautilus-khq5m is created but not running
Aug 16 21:39:11.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9146'
Aug 16 21:39:13.094: INFO: stderr: ""
Aug 16 21:39:13.095: INFO: stdout: "update-demo-nautilus-khq5m update-demo-nautilus-xzh8q "
Aug 16 21:39:13.095: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-khq5m -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9146'
Aug 16 21:39:14.301: INFO: stderr: ""
Aug 16 21:39:14.301: INFO: stdout: "true"
Aug 16 21:39:14.301: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-khq5m -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9146'
Aug 16 21:39:15.481: INFO: stderr: ""
Aug 16 21:39:15.481: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 16 21:39:15.481: INFO: validating pod update-demo-nautilus-khq5m
Aug 16 21:39:15.486: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 16 21:39:15.486: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 16 21:39:15.486: INFO: update-demo-nautilus-khq5m is verified up and running
Aug 16 21:39:15.486: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xzh8q -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9146'
Aug 16 21:39:16.679: INFO: stderr: ""
Aug 16 21:39:16.679: INFO: stdout: "true"
Aug 16 21:39:16.679: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xzh8q -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9146'
Aug 16 21:39:17.856: INFO: stderr: ""
Aug 16 21:39:17.856: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 16 21:39:17.856: INFO: validating pod update-demo-nautilus-xzh8q
Aug 16 21:39:17.862: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 16 21:39:17.862: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 16 21:39:17.862: INFO: update-demo-nautilus-xzh8q is verified up and running
STEP: scaling down the replication controller
Aug 16 21:39:17.868: INFO: scanned /root for discovery docs: 
Aug 16 21:39:17.868: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-9146'
Aug 16 21:39:20.289: INFO: stderr: ""
Aug 16 21:39:20.289: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 16 21:39:20.289: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9146'
Aug 16 21:39:21.611: INFO: stderr: ""
Aug 16 21:39:21.611: INFO: stdout: "update-demo-nautilus-khq5m update-demo-nautilus-xzh8q "
STEP: Replicas for name=update-demo: expected=1 actual=2
Aug 16 21:39:26.612: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9146'
Aug 16 21:39:27.829: INFO: stderr: ""
Aug 16 21:39:27.829: INFO: stdout: "update-demo-nautilus-xzh8q "
Aug 16 21:39:27.829: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xzh8q -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9146'
Aug 16 21:39:29.008: INFO: stderr: ""
Aug 16 21:39:29.008: INFO: stdout: "true"
Aug 16 21:39:29.009: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xzh8q -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9146'
Aug 16 21:39:30.343: INFO: stderr: ""
Aug 16 21:39:30.343: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 16 21:39:30.343: INFO: validating pod update-demo-nautilus-xzh8q
Aug 16 21:39:30.449: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 16 21:39:30.449: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 16 21:39:30.449: INFO: update-demo-nautilus-xzh8q is verified up and running
STEP: scaling up the replication controller
Aug 16 21:39:30.461: INFO: scanned /root for discovery docs: 
Aug 16 21:39:30.461: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-9146'
Aug 16 21:39:32.731: INFO: stderr: ""
Aug 16 21:39:32.731: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 16 21:39:32.732: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9146'
Aug 16 21:39:33.996: INFO: stderr: ""
Aug 16 21:39:33.996: INFO: stdout: "update-demo-nautilus-89v27 update-demo-nautilus-xzh8q "
Aug 16 21:39:33.996: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-89v27 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9146'
Aug 16 21:39:35.191: INFO: stderr: ""
Aug 16 21:39:35.191: INFO: stdout: ""
Aug 16 21:39:35.191: INFO: update-demo-nautilus-89v27 is created but not running
Aug 16 21:39:40.192: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9146'
Aug 16 21:39:41.404: INFO: stderr: ""
Aug 16 21:39:41.405: INFO: stdout: "update-demo-nautilus-89v27 update-demo-nautilus-xzh8q "
Aug 16 21:39:41.405: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-89v27 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9146'
Aug 16 21:39:42.623: INFO: stderr: ""
Aug 16 21:39:42.623: INFO: stdout: "true"
Aug 16 21:39:42.623: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-89v27 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9146'
Aug 16 21:39:43.857: INFO: stderr: ""
Aug 16 21:39:43.858: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 16 21:39:43.858: INFO: validating pod update-demo-nautilus-89v27
Aug 16 21:39:43.863: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 16 21:39:43.863: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 16 21:39:43.863: INFO: update-demo-nautilus-89v27 is verified up and running
Aug 16 21:39:43.863: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xzh8q -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9146'
Aug 16 21:39:45.086: INFO: stderr: ""
Aug 16 21:39:45.086: INFO: stdout: "true"
Aug 16 21:39:45.087: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xzh8q -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9146'
Aug 16 21:39:46.343: INFO: stderr: ""
Aug 16 21:39:46.343: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 16 21:39:46.343: INFO: validating pod update-demo-nautilus-xzh8q
Aug 16 21:39:46.348: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 16 21:39:46.348: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 16 21:39:46.348: INFO: update-demo-nautilus-xzh8q is verified up and running
STEP: using delete to clean up resources
Aug 16 21:39:46.348: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9146'
Aug 16 21:39:47.553: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 16 21:39:47.553: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Aug 16 21:39:47.553: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9146'
Aug 16 21:39:48.801: INFO: stderr: "No resources found in kubectl-9146 namespace.\n"
Aug 16 21:39:48.801: INFO: stdout: ""
Aug 16 21:39:48.801: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-9146 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Aug 16 21:39:50.116: INFO: stderr: ""
Aug 16 21:39:50.116: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:39:50.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9146" for this suite.

• [SLOW TEST:57.182 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:323
    should scale a replication controller  [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","total":278,"completed":249,"skipped":4229,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing validating webhooks should work [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:39:50.218: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 16 21:39:52.029: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 16 21:39:54.046: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733210792, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733210792, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733210792, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733210792, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 16 21:39:56.051: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733210792, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733210792, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733210792, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733210792, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 16 21:39:59.365: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing validating webhooks should work [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:39:59.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1349" for this suite.
STEP: Destroying namespace "webhook-1349-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:9.866 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing validating webhooks should work [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":278,"completed":250,"skipped":4277,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:40:00.085: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on node default medium
Aug 16 21:40:00.222: INFO: Waiting up to 5m0s for pod "pod-e7f569c6-1037-4e47-ade9-fdcfd1aa2df7" in namespace "emptydir-2884" to be "success or failure"
Aug 16 21:40:00.622: INFO: Pod "pod-e7f569c6-1037-4e47-ade9-fdcfd1aa2df7": Phase="Pending", Reason="", readiness=false. Elapsed: 400.168999ms
Aug 16 21:40:02.627: INFO: Pod "pod-e7f569c6-1037-4e47-ade9-fdcfd1aa2df7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.405207043s
Aug 16 21:40:04.741: INFO: Pod "pod-e7f569c6-1037-4e47-ade9-fdcfd1aa2df7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.519214993s
Aug 16 21:40:06.746: INFO: Pod "pod-e7f569c6-1037-4e47-ade9-fdcfd1aa2df7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.523701506s
STEP: Saw pod success
Aug 16 21:40:06.746: INFO: Pod "pod-e7f569c6-1037-4e47-ade9-fdcfd1aa2df7" satisfied condition "success or failure"
Aug 16 21:40:06.750: INFO: Trying to get logs from node jerma-worker pod pod-e7f569c6-1037-4e47-ade9-fdcfd1aa2df7 container test-container: 
STEP: delete the pod
Aug 16 21:40:06.779: INFO: Waiting for pod pod-e7f569c6-1037-4e47-ade9-fdcfd1aa2df7 to disappear
Aug 16 21:40:06.783: INFO: Pod pod-e7f569c6-1037-4e47-ade9-fdcfd1aa2df7 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:40:06.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2884" for this suite.

• [SLOW TEST:6.718 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":251,"skipped":4278,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:40:06.803: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 16 21:40:06.912: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:40:07.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-5637" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":278,"completed":252,"skipped":4285,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run --rm job 
  should create a job from an image, then delete the job [Deprecated] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:40:07.550: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should create a job from an image, then delete the job [Deprecated] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: executing a command with run --rm and attach with stdin
Aug 16 21:40:07.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-11 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Aug 16 21:40:13.631: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0816 21:40:13.286496    4409 log.go:172] (0x4000af2210) (0x4000803b80) Create stream\nI0816 21:40:13.290018    4409 log.go:172] (0x4000af2210) (0x4000803b80) Stream added, broadcasting: 1\nI0816 21:40:13.298669    4409 log.go:172] (0x4000af2210) Reply frame received for 1\nI0816 21:40:13.299132    4409 log.go:172] (0x4000af2210) (0x40005fc000) Create stream\nI0816 21:40:13.299184    4409 log.go:172] (0x4000af2210) (0x40005fc000) Stream added, broadcasting: 3\nI0816 21:40:13.300271    4409 log.go:172] (0x4000af2210) Reply frame received for 3\nI0816 21:40:13.300593    4409 log.go:172] (0x4000af2210) (0x4000803c20) Create stream\nI0816 21:40:13.300658    4409 log.go:172] (0x4000af2210) (0x4000803c20) Stream added, broadcasting: 5\nI0816 21:40:13.301814    4409 log.go:172] (0x4000af2210) Reply frame received for 5\nI0816 21:40:13.302147    4409 log.go:172] (0x4000af2210) (0x40005fc0a0) Create stream\nI0816 21:40:13.302243    4409 log.go:172] (0x4000af2210) (0x40005fc0a0) Stream added, broadcasting: 7\nI0816 21:40:13.303434    4409 log.go:172] (0x4000af2210) Reply frame received for 7\nI0816 21:40:13.305850    4409 log.go:172] (0x40005fc000) (3) Writing data frame\nI0816 21:40:13.306978    4409 log.go:172] (0x40005fc000) (3) Writing data frame\nI0816 21:40:13.307954    4409 log.go:172] (0x4000af2210) Data frame received for 5\nI0816 21:40:13.308261    4409 log.go:172] (0x4000803c20) (5) Data frame handling\nI0816 21:40:13.308705    4409 log.go:172] (0x4000803c20) (5) Data frame sent\nI0816 21:40:13.309204    4409 log.go:172] (0x4000af2210) Data frame received for 5\nI0816 21:40:13.309281    4409 log.go:172] (0x4000803c20) (5) Data frame handling\nI0816 21:40:13.309376    4409 log.go:172] (0x4000803c20) (5) Data frame sent\nI0816 21:40:13.344212    4409 log.go:172] (0x4000af2210) Data frame received for 5\nI0816 21:40:13.344295    4409 log.go:172] (0x4000803c20) (5) Data frame handling\nI0816 21:40:13.344632    4409 log.go:172] (0x4000af2210) Data frame received for 7\nI0816 21:40:13.344706    4409 log.go:172] (0x40005fc0a0) (7) Data frame handling\nI0816 21:40:13.345077    4409 log.go:172] (0x4000af2210) Data frame received for 1\nI0816 21:40:13.345233    4409 log.go:172] (0x4000803b80) (1) Data frame handling\nI0816 21:40:13.345412    4409 log.go:172] (0x4000803b80) (1) Data frame sent\nI0816 21:40:13.346530    4409 log.go:172] (0x4000af2210) (0x4000803b80) Stream removed, broadcasting: 1\nI0816 21:40:13.347292    4409 log.go:172] (0x4000af2210) (0x40005fc000) Stream removed, broadcasting: 3\nI0816 21:40:13.348389    4409 log.go:172] (0x4000af2210) Go away received\nI0816 21:40:13.350209    4409 log.go:172] (0x4000af2210) (0x4000803b80) Stream removed, broadcasting: 1\nI0816 21:40:13.350860    4409 log.go:172] (0x4000af2210) (0x40005fc000) Stream removed, broadcasting: 3\nI0816 21:40:13.350967    4409 log.go:172] (0x4000af2210) (0x4000803c20) Stream removed, broadcasting: 5\nI0816 21:40:13.351705    4409 log.go:172] (0x4000af2210) (0x40005fc0a0) Stream removed, broadcasting: 7\n"
Aug 16 21:40:13.633: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:40:15.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-11" for this suite.

• [SLOW TEST:8.103 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run --rm job
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1843
    should create a job from an image, then delete the job [Deprecated] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Deprecated] [Conformance]","total":278,"completed":253,"skipped":4291,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:40:15.655: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: getting the auto-created API token
Aug 16 21:40:16.286: INFO: created pod pod-service-account-defaultsa
Aug 16 21:40:16.287: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Aug 16 21:40:16.347: INFO: created pod pod-service-account-mountsa
Aug 16 21:40:16.347: INFO: pod pod-service-account-mountsa service account token volume mount: true
Aug 16 21:40:16.361: INFO: created pod pod-service-account-nomountsa
Aug 16 21:40:16.361: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Aug 16 21:40:16.367: INFO: created pod pod-service-account-defaultsa-mountspec
Aug 16 21:40:16.367: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Aug 16 21:40:16.406: INFO: created pod pod-service-account-mountsa-mountspec
Aug 16 21:40:16.406: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Aug 16 21:40:16.440: INFO: created pod pod-service-account-nomountsa-mountspec
Aug 16 21:40:16.440: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Aug 16 21:40:16.481: INFO: created pod pod-service-account-defaultsa-nomountspec
Aug 16 21:40:16.481: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Aug 16 21:40:16.521: INFO: created pod pod-service-account-mountsa-nomountspec
Aug 16 21:40:16.521: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Aug 16 21:40:16.559: INFO: created pod pod-service-account-nomountsa-nomountspec
Aug 16 21:40:16.559: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:40:16.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-1939" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","total":278,"completed":254,"skipped":4299,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}

------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:40:16.646: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 16 21:40:16.774: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Aug 16 21:40:16.795: INFO: Pod name sample-pod: Found 0 pods out of 1
Aug 16 21:40:21.942: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Aug 16 21:40:32.632: INFO: Creating deployment "test-rolling-update-deployment"
Aug 16 21:40:33.066: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Aug 16 21:40:33.614: INFO: deployment "test-rolling-update-deployment" doesn't have the required revision set
Aug 16 21:40:36.040: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Aug 16 21:40:36.057: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733210834, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733210834, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733210834, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733210833, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 16 21:40:38.363: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733210834, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733210834, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733210834, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733210833, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 16 21:40:40.133: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Aug 16 21:40:40.313: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:{test-rolling-update-deployment  deployment-1782 /apis/apps/v1/namespaces/deployment-1782/deployments/test-rolling-update-deployment 7c68a481-8efe-4d0e-b4f3-acd723f23b3f 513471 1 2020-08-16 21:40:32 +0000 UTC   map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x4002f8c8f8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-08-16 21:40:34 +0000 UTC,LastTransitionTime:2020-08-16 21:40:34 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-67cf4f6444" has successfully progressed.,LastUpdateTime:2020-08-16 21:40:39 +0000 UTC,LastTransitionTime:2020-08-16 21:40:33 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Aug 16 21:40:40.320: INFO: New ReplicaSet "test-rolling-update-deployment-67cf4f6444" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:{test-rolling-update-deployment-67cf4f6444  deployment-1782 /apis/apps/v1/namespaces/deployment-1782/replicasets/test-rolling-update-deployment-67cf4f6444 d0337835-b0f8-41b4-9e7e-732c0787bda0 513460 1 2020-08-16 21:40:33 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 7c68a481-8efe-4d0e-b4f3-acd723f23b3f 0x4005a5b1a7 0x4005a5b1a8}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 67cf4f6444,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x4005a5b218  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Aug 16 21:40:40.320: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Aug 16 21:40:40.321: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller  deployment-1782 /apis/apps/v1/namespaces/deployment-1782/replicasets/test-rolling-update-controller b6a7dac3-8e68-4ea8-aa56-3ca7d3cf7f83 513470 2 2020-08-16 21:40:16 +0000 UTC   map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 7c68a481-8efe-4d0e-b4f3-acd723f23b3f 0x4005a5b0d7 0x4005a5b0d8}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0x4005a5b138  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Aug 16 21:40:40.327: INFO: Pod "test-rolling-update-deployment-67cf4f6444-rlv5z" is available:
&Pod{ObjectMeta:{test-rolling-update-deployment-67cf4f6444-rlv5z test-rolling-update-deployment-67cf4f6444- deployment-1782 /api/v1/namespaces/deployment-1782/pods/test-rolling-update-deployment-67cf4f6444-rlv5z e2864bdb-5017-46de-9a44-f305650a425f 513459 0 2020-08-16 21:40:33 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-67cf4f6444 d0337835-b0f8-41b4-9e7e-732c0787bda0 0x4005a5b697 0x4005a5b698}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-p8hbd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-p8hbd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-p8hbd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:40:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:40:39 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:40:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:40:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.2.32,StartTime:2020-08-16 21:40:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-16 21:40:38 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://56f92a95bc275632f29993758469e6aa1e91ff4503f255bddd85b9ddc634b379,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.32,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:40:40.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-1782" for this suite.

• [SLOW TEST:23.692 seconds]
[sig-apps] Deployment
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":255,"skipped":4299,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
S
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:40:40.338: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0816 21:40:53.908627       7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 16 21:40:53.908: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:40:53.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6207" for this suite.

• [SLOW TEST:14.309 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":278,"completed":256,"skipped":4300,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:40:54.651: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Aug 16 21:41:08.007: INFO: Successfully updated pod "annotationupdatebcfaf4d4-ee09-4b8a-88e4-7b6c3d48dc0b"
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:41:10.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3281" for this suite.

• [SLOW TEST:16.333 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":257,"skipped":4334,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:41:10.986: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-1a4b1969-2416-4471-8cf9-027445f88e26
STEP: Creating a pod to test consume configMaps
Aug 16 21:41:12.567: INFO: Waiting up to 5m0s for pod "pod-configmaps-f41919f4-c693-4456-88ea-fe73de8a2b7d" in namespace "configmap-6246" to be "success or failure"
Aug 16 21:41:12.900: INFO: Pod "pod-configmaps-f41919f4-c693-4456-88ea-fe73de8a2b7d": Phase="Pending", Reason="", readiness=false. Elapsed: 332.376494ms
Aug 16 21:41:14.906: INFO: Pod "pod-configmaps-f41919f4-c693-4456-88ea-fe73de8a2b7d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.338925114s
Aug 16 21:41:17.047: INFO: Pod "pod-configmaps-f41919f4-c693-4456-88ea-fe73de8a2b7d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.479839918s
Aug 16 21:41:19.052: INFO: Pod "pod-configmaps-f41919f4-c693-4456-88ea-fe73de8a2b7d": Phase="Running", Reason="", readiness=true. Elapsed: 6.48509017s
Aug 16 21:41:21.058: INFO: Pod "pod-configmaps-f41919f4-c693-4456-88ea-fe73de8a2b7d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.490963533s
STEP: Saw pod success
Aug 16 21:41:21.059: INFO: Pod "pod-configmaps-f41919f4-c693-4456-88ea-fe73de8a2b7d" satisfied condition "success or failure"
Aug 16 21:41:21.067: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-f41919f4-c693-4456-88ea-fe73de8a2b7d container configmap-volume-test: 
STEP: delete the pod
Aug 16 21:41:21.093: INFO: Waiting for pod pod-configmaps-f41919f4-c693-4456-88ea-fe73de8a2b7d to disappear
Aug 16 21:41:21.140: INFO: Pod pod-configmaps-f41919f4-c693-4456-88ea-fe73de8a2b7d no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:41:21.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6246" for this suite.

• [SLOW TEST:10.167 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":258,"skipped":4350,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:41:21.154: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] deployment should support proportional scaling [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 16 21:41:21.215: INFO: Creating deployment "webserver-deployment"
Aug 16 21:41:21.251: INFO: Waiting for observed generation 1
Aug 16 21:41:23.468: INFO: Waiting for all required pods to come up
Aug 16 21:41:23.476: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
Aug 16 21:41:33.525: INFO: Waiting for deployment "webserver-deployment" to complete
Aug 16 21:41:33.537: INFO: Updating deployment "webserver-deployment" with a non-existent image
Aug 16 21:41:33.545: INFO: Updating deployment webserver-deployment
Aug 16 21:41:33.545: INFO: Waiting for observed generation 2
Aug 16 21:41:36.178: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Aug 16 21:41:37.334: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Aug 16 21:41:37.418: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Aug 16 21:41:38.047: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Aug 16 21:41:38.047: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Aug 16 21:41:38.053: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Aug 16 21:41:38.060: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas
Aug 16 21:41:38.061: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30
Aug 16 21:41:38.070: INFO: Updating deployment webserver-deployment
Aug 16 21:41:38.071: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas
Aug 16 21:41:38.872: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Aug 16 21:41:41.684: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Aug 16 21:41:41.935: INFO: Deployment "webserver-deployment":
&Deployment{ObjectMeta:{webserver-deployment  deployment-2957 /apis/apps/v1/namespaces/deployment-2957/deployments/webserver-deployment f3666a46-4f6a-4f34-8b4b-abf32b191d59 514107 3 2020-08-16 21:41:21 +0000 UTC   map[name:httpd] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x40038d8ad8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-08-16 21:41:38 +0000 UTC,LastTransitionTime:2020-08-16 21:41:38 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-08-16 21:41:39 +0000 UTC,LastTransitionTime:2020-08-16 21:41:21 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},}

Aug 16 21:41:42.570: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment":
&ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8  deployment-2957 /apis/apps/v1/namespaces/deployment-2957/replicasets/webserver-deployment-c7997dcc8 3a6ff36a-96ce-4455-a6ca-8cae9cac4a72 514105 3 2020-08-16 21:41:33 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment f3666a46-4f6a-4f34-8b4b-abf32b191d59 0x40049edde7 0x40049edde8}] []  []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x40049ede58  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Aug 16 21:41:42.570: INFO: All old ReplicaSets of Deployment "webserver-deployment":
Aug 16 21:41:42.571: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587  deployment-2957 /apis/apps/v1/namespaces/deployment-2957/replicasets/webserver-deployment-595b5b9587 d164eb08-d28a-4970-b741-dc6ecb789e67 514098 3 2020-08-16 21:41:21 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment f3666a46-4f6a-4f34-8b4b-abf32b191d59 0x40049edd27 0x40049edd28}] []  []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x40049edd88  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},}
Aug 16 21:41:42.624: INFO: Pod "webserver-deployment-595b5b9587-4fht7" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-4fht7 webserver-deployment-595b5b9587- deployment-2957 /api/v1/namespaces/deployment-2957/pods/webserver-deployment-595b5b9587-4fht7 d6cb6303-b346-4b71-853c-cbfd8716c95a 514128 0 2020-08-16 21:41:38 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 d164eb08-d28a-4970-b741-dc6ecb789e67 0x4003a0e357 0x4003a0e358}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ml8lm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ml8lm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ml8lm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2020-08-16 21:41:39 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
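The pods reported "not available" above are still Pending, with the httpd container in ContainerCreating and the Ready condition False; a pod only counts toward a Deployment's available replicas once its Ready condition turns True (and, if minReadySeconds is set, stays True long enough). Below is a minimal sketch of that availability check, assuming the upstream k8s.io/api and k8s.io/apimachinery modules; isPodAvailable is an illustrative helper name, not a function taken from this test's framework.

```go
package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// isPodAvailable is an illustrative helper (assumption, not the e2e
// framework's code): a pod is treated as available when its Ready
// condition is True and has been True for at least minReadySeconds.
func isPodAvailable(pod *corev1.Pod, minReadySeconds int32, now metav1.Time) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type != corev1.PodReady {
			continue
		}
		if c.Status != corev1.ConditionTrue {
			return false
		}
		if minReadySeconds == 0 {
			return true
		}
		readyFor := now.Time.Sub(c.LastTransitionTime.Time)
		return readyFor >= time.Duration(minReadySeconds)*time.Second
	}
	return false
}

func main() {
	// A pod stuck in ContainerCreating has Ready=False, so it is "not available".
	pending := &corev1.Pod{
		Status: corev1.PodStatus{
			Phase: corev1.PodPending,
			Conditions: []corev1.PodCondition{
				{Type: corev1.PodReady, Status: corev1.ConditionFalse},
			},
		},
	}
	fmt.Println(isPodAvailable(pending, 0, metav1.Now())) // prints: false
}
```

This mirrors why the Pending pods above are logged as "not available" while the Running pods with Ready=True are logged as "available".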
Aug 16 21:41:42.626: INFO: Pod "webserver-deployment-595b5b9587-52ctr" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-52ctr webserver-deployment-595b5b9587- deployment-2957 /api/v1/namespaces/deployment-2957/pods/webserver-deployment-595b5b9587-52ctr f12b1b76-d793-464d-af90-81f3933bf487 513958 0 2020-08-16 21:41:21 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 d164eb08-d28a-4970-b741-dc6ecb789e67 0x4003a0e4b0 0x4003a0e4b1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ml8lm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ml8lm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ml8lm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.2.42,StartTime:2020-08-16 21:41:21 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-16 21:41:31 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://e86bda0beb8b134e08851e81c62e4295fa87bcc04b2b80c137816b18ffef2341,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.42,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
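The per-pod dumps in this block enumerate the webserver-deployment pods in namespace deployment-2957 by their labels. A minimal sketch of reproducing the same listing with client-go follows; the kubeconfig path is an assumption, and the label selector name=httpd matches the pod labels shown above.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Path to a kubeconfig for the test cluster (assumption for illustration).
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// List the webserver-deployment pods by the label shown in the dumps above.
	pods, err := clientset.CoreV1().Pods("deployment-2957").List(
		context.TODO(),
		metav1.ListOptions{LabelSelector: "name=httpd"},
	)
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s phase=%s hostIP=%s podIP=%s\n",
			p.Name, p.Status.Phase, p.Status.HostIP, p.Status.PodIP)
	}
}
```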
Aug 16 21:41:42.627: INFO: Pod "webserver-deployment-595b5b9587-6v4lj" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-6v4lj webserver-deployment-595b5b9587- deployment-2957 /api/v1/namespaces/deployment-2957/pods/webserver-deployment-595b5b9587-6v4lj b300c2e7-46fb-4ebf-b40e-c7c7c3b5fc69 514138 0 2020-08-16 21:41:39 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 d164eb08-d28a-4970-b741-dc6ecb789e67 0x4003a0e620 0x4003a0e621}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ml8lm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ml8lm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ml8lm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-08-16 21:41:39 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 16 21:41:42.628: INFO: Pod "webserver-deployment-595b5b9587-75vsc" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-75vsc webserver-deployment-595b5b9587- deployment-2957 /api/v1/namespaces/deployment-2957/pods/webserver-deployment-595b5b9587-75vsc 84ec1f2a-ae64-4766-a666-e2429c138476 513952 0 2020-08-16 21:41:21 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 d164eb08-d28a-4970-b741-dc6ecb789e67 0x4003a0e770 0x4003a0e771}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ml8lm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ml8lm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ml8lm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.2.41,StartTime:2020-08-16 21:41:21 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-16 21:41:31 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://50bb4f912d94a66cde0d82eed0883bbbda7afa242ff944f5b58a43dd6d8b0aac,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.41,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 16 21:41:42.629: INFO: Pod "webserver-deployment-595b5b9587-7v7fc" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-7v7fc webserver-deployment-595b5b9587- deployment-2957 /api/v1/namespaces/deployment-2957/pods/webserver-deployment-595b5b9587-7v7fc b3c6192f-4781-4ec3-b424-241d47ddefa6 514084 0 2020-08-16 21:41:38 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 d164eb08-d28a-4970-b741-dc6ecb789e67 0x4003a0e8e0 0x4003a0e8e1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ml8lm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ml8lm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ml8lm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-08-16 21:41:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 16 21:41:42.630: INFO: Pod "webserver-deployment-595b5b9587-9j29l" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-9j29l webserver-deployment-595b5b9587- deployment-2957 /api/v1/namespaces/deployment-2957/pods/webserver-deployment-595b5b9587-9j29l acaa1560-6cfd-42e9-842e-e61ea231903c 514170 0 2020-08-16 21:41:39 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 d164eb08-d28a-4970-b741-dc6ecb789e67 0x4003a0ea30 0x4003a0ea31}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ml8lm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ml8lm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ml8lm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2020-08-16 21:41:39 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 16 21:41:42.631: INFO: Pod "webserver-deployment-595b5b9587-cp75w" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-cp75w webserver-deployment-595b5b9587- deployment-2957 /api/v1/namespaces/deployment-2957/pods/webserver-deployment-595b5b9587-cp75w 159f0d5e-119e-4168-8dbf-52f45535ef19 514115 0 2020-08-16 21:41:38 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 d164eb08-d28a-4970-b741-dc6ecb789e67 0x4003a0eb80 0x4003a0eb81}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ml8lm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ml8lm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ml8lm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2020-08-16 21:41:39 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 16 21:41:42.633: INFO: Pod "webserver-deployment-595b5b9587-dkf6f" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-dkf6f webserver-deployment-595b5b9587- deployment-2957 /api/v1/namespaces/deployment-2957/pods/webserver-deployment-595b5b9587-dkf6f ba9012d0-8e55-4e92-a9e9-27daab2f9bfe 513907 0 2020-08-16 21:41:21 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 d164eb08-d28a-4970-b741-dc6ecb789e67 0x4003a0ecd0 0x4003a0ecd1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ml8lm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ml8lm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ml8lm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:10.244.1.62,StartTime:2020-08-16 21:41:21 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-16 21:41:26 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://778c0ce3175298c831e6386a73193fad5aef232575c310e2848599851fc9ba38,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.62,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 16 21:41:42.634: INFO: Pod "webserver-deployment-595b5b9587-hxs2q" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-hxs2q webserver-deployment-595b5b9587- deployment-2957 /api/v1/namespaces/deployment-2957/pods/webserver-deployment-595b5b9587-hxs2q 5915d8ea-3237-4092-98b1-af97c14bf574 513959 0 2020-08-16 21:41:21 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 d164eb08-d28a-4970-b741-dc6ecb789e67 0x4003a0ee50 0x4003a0ee51}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ml8lm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ml8lm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ml8lm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:10.244.1.66,StartTime:2020-08-16 21:41:21 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-16 21:41:31 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://4a1ff3c0c536313325b91a00cf66905fd633abfeba6bd0f552965823da5fc8ba,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.66,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 16 21:41:42.636: INFO: Pod "webserver-deployment-595b5b9587-lwlg6" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-lwlg6 webserver-deployment-595b5b9587- deployment-2957 /api/v1/namespaces/deployment-2957/pods/webserver-deployment-595b5b9587-lwlg6 8915e08d-d893-4abe-bf9a-e927467dc3e6 514166 0 2020-08-16 21:41:39 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 d164eb08-d28a-4970-b741-dc6ecb789e67 0x4003a0efd0 0x4003a0efd1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ml8lm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ml8lm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ml8lm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-08-16 21:41:39 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 16 21:41:42.637: INFO: Pod "webserver-deployment-595b5b9587-lwtzv" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-lwtzv webserver-deployment-595b5b9587- deployment-2957 /api/v1/namespaces/deployment-2957/pods/webserver-deployment-595b5b9587-lwtzv 94a02009-c1b3-49c7-93df-2295e2c72966 513951 0 2020-08-16 21:41:21 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 d164eb08-d28a-4970-b741-dc6ecb789e67 0x4003a0f130 0x4003a0f131}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ml8lm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ml8lm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ml8lm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:10.244.1.63,StartTime:2020-08-16 21:41:21 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-16 21:41:31 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://b9519a2509132c9c83f60bfcbc51891a348e4b1456786df7ffc86cf7133d4cd4,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.63,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 16 21:41:42.638: INFO: Pod "webserver-deployment-595b5b9587-nr67r" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-nr67r webserver-deployment-595b5b9587- deployment-2957 /api/v1/namespaces/deployment-2957/pods/webserver-deployment-595b5b9587-nr67r 7ca979aa-5de6-4c62-9a15-be5778a3f773 514135 0 2020-08-16 21:41:38 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 d164eb08-d28a-4970-b741-dc6ecb789e67 0x4003a0f2d0 0x4003a0f2d1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ml8lm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ml8lm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ml8lm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2020-08-16 21:41:39 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 16 21:41:42.639: INFO: Pod "webserver-deployment-595b5b9587-ph7cn" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-ph7cn webserver-deployment-595b5b9587- deployment-2957 /api/v1/namespaces/deployment-2957/pods/webserver-deployment-595b5b9587-ph7cn ce1cf9fb-a8a8-4f5e-8a95-67c734642b43 514163 0 2020-08-16 21:41:39 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 d164eb08-d28a-4970-b741-dc6ecb789e67 0x4003a0f460 0x4003a0f461}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ml8lm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ml8lm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ml8lm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2020-08-16 21:41:39 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 16 21:41:42.641: INFO: Pod "webserver-deployment-595b5b9587-pw68h" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-pw68h webserver-deployment-595b5b9587- deployment-2957 /api/v1/namespaces/deployment-2957/pods/webserver-deployment-595b5b9587-pw68h a62a31c6-432d-49e8-9c71-21ad28645d87 513935 0 2020-08-16 21:41:21 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 d164eb08-d28a-4970-b741-dc6ecb789e67 0x4003a0f5e0 0x4003a0f5e1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ml8lm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ml8lm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ml8lm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.2.40,StartTime:2020-08-16 21:41:21 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-16 21:41:30 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://e620387987bd86a9c167ca70b8f27e83e40b36457f21b86d588470cc42f8b0e9,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.40,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
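For readers following the dumps above: the framework's "is available" / "is not available" classification lines up with each pod's Ready condition in the dumped Status (True for the running httpd pods, False for the Pending ones). Below is a minimal, self-contained sketch of that check against the core/v1 types; isPodReady is an illustrative helper name, not the framework's own function, and it ignores any minReadySeconds handling the framework may additionally apply.

    package podstatus

    import (
    	corev1 "k8s.io/api/core/v1"
    )

    // isPodReady reports whether the pod's Ready condition is True, which is
    // what distinguishes the "available" pods above from the Pending ones.
    func isPodReady(pod *corev1.Pod) bool {
    	for _, cond := range pod.Status.Conditions {
    		if cond.Type == corev1.PodReady {
    			return cond.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }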
Aug 16 21:41:42.642: INFO: Pod "webserver-deployment-595b5b9587-r6m5m" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-r6m5m webserver-deployment-595b5b9587- deployment-2957 /api/v1/namespaces/deployment-2957/pods/webserver-deployment-595b5b9587-r6m5m 1dad8aeb-b798-4e5b-8447-aedb7af05da9 514104 0 2020-08-16 21:41:38 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 d164eb08-d28a-4970-b741-dc6ecb789e67 0x4003a0f840 0x4003a0f841}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ml8lm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ml8lm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ml8lm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2020-08-16 21:41:39 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 16 21:41:42.643: INFO: Pod "webserver-deployment-595b5b9587-rzpfd" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-rzpfd webserver-deployment-595b5b9587- deployment-2957 /api/v1/namespaces/deployment-2957/pods/webserver-deployment-595b5b9587-rzpfd ae9a59a2-2f91-4592-819c-c87b379f50de 514173 0 2020-08-16 21:41:39 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 d164eb08-d28a-4970-b741-dc6ecb789e67 0x4003a0f990 0x4003a0f991}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ml8lm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ml8lm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ml8lm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-08-16 21:41:39 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 16 21:41:42.644: INFO: Pod "webserver-deployment-595b5b9587-sjttg" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-sjttg webserver-deployment-595b5b9587- deployment-2957 /api/v1/namespaces/deployment-2957/pods/webserver-deployment-595b5b9587-sjttg 4bc2d79c-6bfd-45f6-af0e-3dfe30ecdcbe 513930 0 2020-08-16 21:41:21 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 d164eb08-d28a-4970-b741-dc6ecb789e67 0x4003a0faf0 0x4003a0faf1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ml8lm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ml8lm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ml8lm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.2.39,StartTime:2020-08-16 21:41:21 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-16 21:41:30 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://22ec593b50ea622f74068a3d71359cc267c5846378d05c2e91450b733ee91a91,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.39,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 16 21:41:42.645: INFO: Pod "webserver-deployment-595b5b9587-v4kmq" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-v4kmq webserver-deployment-595b5b9587- deployment-2957 /api/v1/namespaces/deployment-2957/pods/webserver-deployment-595b5b9587-v4kmq d37d945f-0c8e-43db-8582-86742fc2dc4d 514131 0 2020-08-16 21:41:38 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 d164eb08-d28a-4970-b741-dc6ecb789e67 0x4003a0fc60 0x4003a0fc61}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ml8lm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ml8lm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ml8lm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-08-16 21:41:39 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 16 21:41:42.646: INFO: Pod "webserver-deployment-595b5b9587-v7p7x" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-v7p7x webserver-deployment-595b5b9587- deployment-2957 /api/v1/namespaces/deployment-2957/pods/webserver-deployment-595b5b9587-v7p7x aa321140-a042-4270-854b-f5f5bf44a9fa 513946 0 2020-08-16 21:41:21 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 d164eb08-d28a-4970-b741-dc6ecb789e67 0x4003a0fdb0 0x4003a0fdb1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ml8lm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ml8lm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ml8lm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:10.244.1.64,StartTime:2020-08-16 21:41:21 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-16 21:41:31 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://505a99ed39d15095a1e138cbd544f9b810390a2a6c8a4c5b1c8f10ad0471570e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.64,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 16 21:41:42.647: INFO: Pod "webserver-deployment-595b5b9587-zxmhd" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-zxmhd webserver-deployment-595b5b9587- deployment-2957 /api/v1/namespaces/deployment-2957/pods/webserver-deployment-595b5b9587-zxmhd 5ddb8766-4009-46a2-932e-8e9f872c54bd 514117 0 2020-08-16 21:41:38 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 d164eb08-d28a-4970-b741-dc6ecb789e67 0x4003a0ff20 0x4003a0ff21}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ml8lm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ml8lm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ml8lm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-08-16 21:41:39 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 16 21:41:42.648: INFO: Pod "webserver-deployment-c7997dcc8-252wp" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-252wp webserver-deployment-c7997dcc8- deployment-2957 /api/v1/namespaces/deployment-2957/pods/webserver-deployment-c7997dcc8-252wp d20f4736-f91a-4777-b37d-11ac1c7da1bb 514031 0 2020-08-16 21:41:35 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 3a6ff36a-96ce-4455-a6ca-8cae9cac4a72 0x4003fdc090 0x4003fdc091}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ml8lm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ml8lm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ml8lm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2020-08-16 21:41:37 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
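The webserver-deployment-c7997dcc8-* pods above differ from the 595b5b9587 pods in their pod-template-hash label and image (webserver:404 instead of httpd:2.4.38-alpine); they belong to a second ReplicaSet created for an updated pod template, and all of them are still Pending. To reproduce this view outside the test, a client-go sketch along the following lines would list just the newer ReplicaSet's pods by that label. The namespace and hash are taken from the log, the kubeconfig path is an assumption, and the List signature shown is the client-go v0.18+ one (older releases omit the context argument).

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Kubeconfig path assumed; adjust for your environment.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)

    	// Select only the pods of the newer ReplicaSet via its pod-template-hash label.
    	pods, err := client.CoreV1().Pods("deployment-2957").List(context.TODO(),
    		metav1.ListOptions{LabelSelector: "pod-template-hash=c7997dcc8"})
    	if err != nil {
    		panic(err)
    	}
    	for _, p := range pods.Items {
    		fmt.Printf("%s\tphase=%s\n", p.Name, p.Status.Phase)
    	}
    }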
Aug 16 21:41:42.649: INFO: Pod "webserver-deployment-c7997dcc8-44rfh" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-44rfh webserver-deployment-c7997dcc8- deployment-2957 /api/v1/namespaces/deployment-2957/pods/webserver-deployment-c7997dcc8-44rfh 1f348f55-b225-44bf-aee8-d8dfeb460f1b 514145 0 2020-08-16 21:41:39 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 3a6ff36a-96ce-4455-a6ca-8cae9cac4a72 0x4003fdc240 0x4003fdc241}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ml8lm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ml8lm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ml8lm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-08-16 21:41:39 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 16 21:41:42.650: INFO: Pod "webserver-deployment-c7997dcc8-6cb4q" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-6cb4q webserver-deployment-c7997dcc8- deployment-2957 /api/v1/namespaces/deployment-2957/pods/webserver-deployment-c7997dcc8-6cb4q d8b11a56-f741-4074-bdab-03b2eb1a2cab 514181 0 2020-08-16 21:41:33 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 3a6ff36a-96ce-4455-a6ca-8cae9cac4a72 0x4003fdc3d0 0x4003fdc3d1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ml8lm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ml8lm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ml8lm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.2.45,StartTime:2020-08-16 21:41:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.45,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
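The Waiting reason on the pod above (and on several c7997dcc8 pods that follow) is ErrImagePull: the kubelet cannot resolve docker.io/library/webserver:404, so these containers never start, the pods stay Pending, and the new ReplicaSet cannot become available. Below is a small sketch of how one might pull that reason out of a dumped PodStatus; waitingReasons is an illustrative helper name, not part of the framework.

    package podstatus

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    // waitingReasons returns "container: Reason" for every container stuck in a
    // Waiting state, e.g. "httpd: ErrImagePull" for the pod dumped above.
    func waitingReasons(pod *corev1.Pod) []string {
    	var reasons []string
    	for _, cs := range pod.Status.ContainerStatuses {
    		if cs.State.Waiting != nil {
    			reasons = append(reasons, fmt.Sprintf("%s: %s", cs.Name, cs.State.Waiting.Reason))
    		}
    	}
    	return reasons
    }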
Aug 16 21:41:42.651: INFO: Pod "webserver-deployment-c7997dcc8-7784s" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-7784s webserver-deployment-c7997dcc8- deployment-2957 /api/v1/namespaces/deployment-2957/pods/webserver-deployment-c7997dcc8-7784s b4d131f4-b693-4696-aec7-fcf9b7ca7bf7 514152 0 2020-08-16 21:41:39 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 3a6ff36a-96ce-4455-a6ca-8cae9cac4a72 0x4003fdc580 0x4003fdc581}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ml8lm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ml8lm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ml8lm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2020-08-16 21:41:39 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 16 21:41:42.652: INFO: Pod "webserver-deployment-c7997dcc8-84pwc" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-84pwc webserver-deployment-c7997dcc8- deployment-2957 /api/v1/namespaces/deployment-2957/pods/webserver-deployment-c7997dcc8-84pwc 10736e66-0717-4109-85ed-0416c2108a16 514182 0 2020-08-16 21:41:35 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 3a6ff36a-96ce-4455-a6ca-8cae9cac4a72 0x4003fdc710 0x4003fdc711}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ml8lm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ml8lm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ml8lm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:10.244.1.68,StartTime:2020-08-16 21:41:36 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.68,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 16 21:41:42.653: INFO: Pod "webserver-deployment-c7997dcc8-89f7c" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-89f7c webserver-deployment-c7997dcc8- deployment-2957 /api/v1/namespaces/deployment-2957/pods/webserver-deployment-c7997dcc8-89f7c 12b8c6d3-5011-44ed-ad36-0e9aeacbfc6c 514062 0 2020-08-16 21:41:33 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 3a6ff36a-96ce-4455-a6ca-8cae9cac4a72 0x4003fdc8d0 0x4003fdc8d1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ml8lm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ml8lm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ml8lm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:10.244.1.67,StartTime:2020-08-16 21:41:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.67,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 16 21:41:42.654: INFO: Pod "webserver-deployment-c7997dcc8-bq5sg" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-bq5sg webserver-deployment-c7997dcc8- deployment-2957 /api/v1/namespaces/deployment-2957/pods/webserver-deployment-c7997dcc8-bq5sg c20db07d-db8a-460e-965c-e50f7327e2aa 514124 0 2020-08-16 21:41:38 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 3a6ff36a-96ce-4455-a6ca-8cae9cac4a72 0x4003fdca70 0x4003fdca71}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ml8lm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ml8lm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ml8lm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-08-16 21:41:39 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 16 21:41:42.655: INFO: Pod "webserver-deployment-c7997dcc8-frmxr" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-frmxr webserver-deployment-c7997dcc8- deployment-2957 /api/v1/namespaces/deployment-2957/pods/webserver-deployment-c7997dcc8-frmxr d39683d3-8e68-439e-9d58-3dac8b88c3ea 514175 0 2020-08-16 21:41:39 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 3a6ff36a-96ce-4455-a6ca-8cae9cac4a72 0x4003fdcbe0 0x4003fdcbe1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ml8lm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ml8lm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ml8lm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2020-08-16 21:41:39 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 16 21:41:42.657: INFO: Pod "webserver-deployment-c7997dcc8-n8rrg" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-n8rrg webserver-deployment-c7997dcc8- deployment-2957 /api/v1/namespaces/deployment-2957/pods/webserver-deployment-c7997dcc8-n8rrg 7fb8a5ea-eb99-40fc-bfcd-df7b70ffe279 514143 0 2020-08-16 21:41:39 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 3a6ff36a-96ce-4455-a6ca-8cae9cac4a72 0x4003fdcd60 0x4003fdcd61}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ml8lm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ml8lm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ml8lm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2020-08-16 21:41:39 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 16 21:41:42.658: INFO: Pod "webserver-deployment-c7997dcc8-qp5dl" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-qp5dl webserver-deployment-c7997dcc8- deployment-2957 /api/v1/namespaces/deployment-2957/pods/webserver-deployment-c7997dcc8-qp5dl 78da5e23-eb21-4b71-ac2f-cae29fef7e41 514154 0 2020-08-16 21:41:39 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 3a6ff36a-96ce-4455-a6ca-8cae9cac4a72 0x4003fdcef0 0x4003fdcef1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ml8lm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ml8lm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ml8lm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-08-16 21:41:39 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 16 21:41:42.658: INFO: Pod "webserver-deployment-c7997dcc8-tc247" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-tc247 webserver-deployment-c7997dcc8- deployment-2957 /api/v1/namespaces/deployment-2957/pods/webserver-deployment-c7997dcc8-tc247 03c6404a-4249-4bf0-a007-caba2c119cf3 514108 0 2020-08-16 21:41:38 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 3a6ff36a-96ce-4455-a6ca-8cae9cac4a72 0x4003fdd080 0x4003fdd081}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ml8lm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ml8lm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ml8lm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-08-16 21:41:39 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 16 21:41:42.659: INFO: Pod "webserver-deployment-c7997dcc8-zbfs5" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-zbfs5 webserver-deployment-c7997dcc8- deployment-2957 /api/v1/namespaces/deployment-2957/pods/webserver-deployment-c7997dcc8-zbfs5 91934f13-7c6c-4b3f-a64a-de1c7c6401c3 514176 0 2020-08-16 21:41:33 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 3a6ff36a-96ce-4455-a6ca-8cae9cac4a72 0x4003fdd230 0x4003fdd231}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ml8lm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ml8lm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ml8lm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.2.44,StartTime:2020-08-16 21:41:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.44,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 16 21:41:42.660: INFO: Pod "webserver-deployment-c7997dcc8-zjjls" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-zjjls webserver-deployment-c7997dcc8- deployment-2957 /api/v1/namespaces/deployment-2957/pods/webserver-deployment-c7997dcc8-zjjls 6c9f4ef6-1cd1-47d0-b5fb-ce92dff66b83 514122 0 2020-08-16 21:41:38 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 3a6ff36a-96ce-4455-a6ca-8cae9cac4a72 0x4003fdd3f0 0x4003fdd3f1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ml8lm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ml8lm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ml8lm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-16 21:41:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2020-08-16 21:41:39 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:41:42.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-2957" for this suite.

• [SLOW TEST:21.665 seconds]
[sig-apps] Deployment
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":278,"completed":259,"skipped":4353,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:41:42.822: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Aug 16 21:42:36.669: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5127 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 16 21:42:36.669: INFO: >>> kubeConfig: /root/.kube/config
I0816 21:42:37.194270       7 log.go:172] (0x4002efc160) (0x40021ec280) Create stream
I0816 21:42:37.194453       7 log.go:172] (0x4002efc160) (0x40021ec280) Stream added, broadcasting: 1
I0816 21:42:37.198711       7 log.go:172] (0x4002efc160) Reply frame received for 1
I0816 21:42:37.198884       7 log.go:172] (0x4002efc160) (0x40011d54a0) Create stream
I0816 21:42:37.198965       7 log.go:172] (0x4002efc160) (0x40011d54a0) Stream added, broadcasting: 3
I0816 21:42:37.200091       7 log.go:172] (0x4002efc160) Reply frame received for 3
I0816 21:42:37.200200       7 log.go:172] (0x4002efc160) (0x40021ec3c0) Create stream
I0816 21:42:37.200257       7 log.go:172] (0x4002efc160) (0x40021ec3c0) Stream added, broadcasting: 5
I0816 21:42:37.201576       7 log.go:172] (0x4002efc160) Reply frame received for 5
I0816 21:42:37.263473       7 log.go:172] (0x4002efc160) Data frame received for 5
I0816 21:42:37.263613       7 log.go:172] (0x40021ec3c0) (5) Data frame handling
I0816 21:42:37.263773       7 log.go:172] (0x4002efc160) Data frame received for 3
I0816 21:42:37.263903       7 log.go:172] (0x40011d54a0) (3) Data frame handling
I0816 21:42:37.264030       7 log.go:172] (0x40011d54a0) (3) Data frame sent
I0816 21:42:37.264131       7 log.go:172] (0x4002efc160) Data frame received for 3
I0816 21:42:37.264219       7 log.go:172] (0x40011d54a0) (3) Data frame handling
I0816 21:42:37.265280       7 log.go:172] (0x4002efc160) Data frame received for 1
I0816 21:42:37.265357       7 log.go:172] (0x40021ec280) (1) Data frame handling
I0816 21:42:37.265441       7 log.go:172] (0x40021ec280) (1) Data frame sent
I0816 21:42:37.265593       7 log.go:172] (0x4002efc160) (0x40021ec280) Stream removed, broadcasting: 1
I0816 21:42:37.265717       7 log.go:172] (0x4002efc160) Go away received
I0816 21:42:37.266171       7 log.go:172] (0x4002efc160) (0x40021ec280) Stream removed, broadcasting: 1
I0816 21:42:37.266263       7 log.go:172] (0x4002efc160) (0x40011d54a0) Stream removed, broadcasting: 3
I0816 21:42:37.266359       7 log.go:172] (0x4002efc160) (0x40021ec3c0) Stream removed, broadcasting: 5
Aug 16 21:42:37.266: INFO: Exec stderr: ""
Aug 16 21:42:37.266: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5127 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 16 21:42:37.266: INFO: >>> kubeConfig: /root/.kube/config
I0816 21:42:37.540408       7 log.go:172] (0x40032864d0) (0x400165c460) Create stream
I0816 21:42:37.540583       7 log.go:172] (0x40032864d0) (0x400165c460) Stream added, broadcasting: 1
I0816 21:42:37.544662       7 log.go:172] (0x40032864d0) Reply frame received for 1
I0816 21:42:37.544933       7 log.go:172] (0x40032864d0) (0x400146e280) Create stream
I0816 21:42:37.545017       7 log.go:172] (0x40032864d0) (0x400146e280) Stream added, broadcasting: 3
I0816 21:42:37.546549       7 log.go:172] (0x40032864d0) Reply frame received for 3
I0816 21:42:37.546699       7 log.go:172] (0x40032864d0) (0x400146e460) Create stream
I0816 21:42:37.546763       7 log.go:172] (0x40032864d0) (0x400146e460) Stream added, broadcasting: 5
I0816 21:42:37.548062       7 log.go:172] (0x40032864d0) Reply frame received for 5
I0816 21:42:37.603873       7 log.go:172] (0x40032864d0) Data frame received for 5
I0816 21:42:37.604021       7 log.go:172] (0x400146e460) (5) Data frame handling
I0816 21:42:37.604190       7 log.go:172] (0x40032864d0) Data frame received for 3
I0816 21:42:37.604354       7 log.go:172] (0x400146e280) (3) Data frame handling
I0816 21:42:37.604488       7 log.go:172] (0x400146e280) (3) Data frame sent
I0816 21:42:37.604595       7 log.go:172] (0x40032864d0) Data frame received for 3
I0816 21:42:37.604885       7 log.go:172] (0x400146e280) (3) Data frame handling
I0816 21:42:37.605492       7 log.go:172] (0x40032864d0) Data frame received for 1
I0816 21:42:37.605657       7 log.go:172] (0x400165c460) (1) Data frame handling
I0816 21:42:37.605814       7 log.go:172] (0x400165c460) (1) Data frame sent
I0816 21:42:37.606003       7 log.go:172] (0x40032864d0) (0x400165c460) Stream removed, broadcasting: 1
I0816 21:42:37.606204       7 log.go:172] (0x40032864d0) Go away received
I0816 21:42:37.606745       7 log.go:172] (0x40032864d0) (0x400165c460) Stream removed, broadcasting: 1
I0816 21:42:37.606948       7 log.go:172] (0x40032864d0) (0x400146e280) Stream removed, broadcasting: 3
I0816 21:42:37.607102       7 log.go:172] (0x40032864d0) (0x400146e460) Stream removed, broadcasting: 5
Aug 16 21:42:37.607: INFO: Exec stderr: ""
Aug 16 21:42:37.607: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5127 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 16 21:42:37.607: INFO: >>> kubeConfig: /root/.kube/config
I0816 21:42:41.805745       7 log.go:172] (0x4003286bb0) (0x400165cf00) Create stream
I0816 21:42:41.805900       7 log.go:172] (0x4003286bb0) (0x400165cf00) Stream added, broadcasting: 1
I0816 21:42:41.808968       7 log.go:172] (0x4003286bb0) Reply frame received for 1
I0816 21:42:41.809144       7 log.go:172] (0x4003286bb0) (0x400034c640) Create stream
I0816 21:42:41.809223       7 log.go:172] (0x4003286bb0) (0x400034c640) Stream added, broadcasting: 3
I0816 21:42:41.810248       7 log.go:172] (0x4003286bb0) Reply frame received for 3
I0816 21:42:41.810348       7 log.go:172] (0x4003286bb0) (0x400165d040) Create stream
I0816 21:42:41.810405       7 log.go:172] (0x4003286bb0) (0x400165d040) Stream added, broadcasting: 5
I0816 21:42:41.811223       7 log.go:172] (0x4003286bb0) Reply frame received for 5
I0816 21:42:41.859599       7 log.go:172] (0x4003286bb0) Data frame received for 5
I0816 21:42:41.859748       7 log.go:172] (0x400165d040) (5) Data frame handling
I0816 21:42:41.859877       7 log.go:172] (0x4003286bb0) Data frame received for 3
I0816 21:42:41.859943       7 log.go:172] (0x400034c640) (3) Data frame handling
I0816 21:42:41.860012       7 log.go:172] (0x400034c640) (3) Data frame sent
I0816 21:42:41.860057       7 log.go:172] (0x4003286bb0) Data frame received for 3
I0816 21:42:41.860100       7 log.go:172] (0x400034c640) (3) Data frame handling
I0816 21:42:41.860652       7 log.go:172] (0x4003286bb0) Data frame received for 1
I0816 21:42:41.860703       7 log.go:172] (0x400165cf00) (1) Data frame handling
I0816 21:42:41.860807       7 log.go:172] (0x400165cf00) (1) Data frame sent
I0816 21:42:41.860868       7 log.go:172] (0x4003286bb0) (0x400165cf00) Stream removed, broadcasting: 1
I0816 21:42:41.861125       7 log.go:172] (0x4003286bb0) Go away received
I0816 21:42:41.861215       7 log.go:172] (0x4003286bb0) (0x400165cf00) Stream removed, broadcasting: 1
I0816 21:42:41.861314       7 log.go:172] (0x4003286bb0) (0x400034c640) Stream removed, broadcasting: 3
I0816 21:42:41.861383       7 log.go:172] (0x4003286bb0) (0x400165d040) Stream removed, broadcasting: 5
Aug 16 21:42:41.861: INFO: Exec stderr: ""
Aug 16 21:42:41.861: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5127 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 16 21:42:41.861: INFO: >>> kubeConfig: /root/.kube/config
I0816 21:42:42.965559       7 log.go:172] (0x4002912580) (0x4001008780) Create stream
I0816 21:42:42.965734       7 log.go:172] (0x4002912580) (0x4001008780) Stream added, broadcasting: 1
I0816 21:42:42.970489       7 log.go:172] (0x4002912580) Reply frame received for 1
I0816 21:42:42.970647       7 log.go:172] (0x4002912580) (0x400165d0e0) Create stream
I0816 21:42:42.970734       7 log.go:172] (0x4002912580) (0x400165d0e0) Stream added, broadcasting: 3
I0816 21:42:42.972387       7 log.go:172] (0x4002912580) Reply frame received for 3
I0816 21:42:42.972577       7 log.go:172] (0x4002912580) (0x400165d220) Create stream
I0816 21:42:42.972679       7 log.go:172] (0x4002912580) (0x400165d220) Stream added, broadcasting: 5
I0816 21:42:42.974095       7 log.go:172] (0x4002912580) Reply frame received for 5
I0816 21:42:43.041399       7 log.go:172] (0x4002912580) Data frame received for 3
I0816 21:42:43.041642       7 log.go:172] (0x400165d0e0) (3) Data frame handling
I0816 21:42:43.041868       7 log.go:172] (0x400165d0e0) (3) Data frame sent
I0816 21:42:43.042179       7 log.go:172] (0x4002912580) Data frame received for 3
I0816 21:42:43.042260       7 log.go:172] (0x400165d0e0) (3) Data frame handling
I0816 21:42:43.042361       7 log.go:172] (0x4002912580) Data frame received for 5
I0816 21:42:43.042548       7 log.go:172] (0x400165d220) (5) Data frame handling
I0816 21:42:43.042764       7 log.go:172] (0x4002912580) Data frame received for 1
I0816 21:42:43.042897       7 log.go:172] (0x4001008780) (1) Data frame handling
I0816 21:42:43.042989       7 log.go:172] (0x4001008780) (1) Data frame sent
I0816 21:42:43.043076       7 log.go:172] (0x4002912580) (0x4001008780) Stream removed, broadcasting: 1
I0816 21:42:43.043176       7 log.go:172] (0x4002912580) Go away received
I0816 21:42:43.043451       7 log.go:172] (0x4002912580) (0x4001008780) Stream removed, broadcasting: 1
I0816 21:42:43.043571       7 log.go:172] (0x4002912580) (0x400165d0e0) Stream removed, broadcasting: 3
I0816 21:42:43.043649       7 log.go:172] (0x4002912580) (0x400165d220) Stream removed, broadcasting: 5
Aug 16 21:42:43.043: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Aug 16 21:42:43.044: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5127 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 16 21:42:43.044: INFO: >>> kubeConfig: /root/.kube/config
I0816 21:42:43.283894       7 log.go:172] (0x40032871e0) (0x400165db80) Create stream
I0816 21:42:43.284012       7 log.go:172] (0x40032871e0) (0x400165db80) Stream added, broadcasting: 1
I0816 21:42:43.287146       7 log.go:172] (0x40032871e0) Reply frame received for 1
I0816 21:42:43.287310       7 log.go:172] (0x40032871e0) (0x40021ec500) Create stream
I0816 21:42:43.287413       7 log.go:172] (0x40032871e0) (0x40021ec500) Stream added, broadcasting: 3
I0816 21:42:43.289009       7 log.go:172] (0x40032871e0) Reply frame received for 3
I0816 21:42:43.289131       7 log.go:172] (0x40032871e0) (0x400165dc20) Create stream
I0816 21:42:43.289189       7 log.go:172] (0x40032871e0) (0x400165dc20) Stream added, broadcasting: 5
I0816 21:42:43.290116       7 log.go:172] (0x40032871e0) Reply frame received for 5
I0816 21:42:43.344801       7 log.go:172] (0x40032871e0) Data frame received for 3
I0816 21:42:43.344944       7 log.go:172] (0x40021ec500) (3) Data frame handling
I0816 21:42:43.345037       7 log.go:172] (0x40032871e0) Data frame received for 5
I0816 21:42:43.345166       7 log.go:172] (0x400165dc20) (5) Data frame handling
I0816 21:42:43.345290       7 log.go:172] (0x40021ec500) (3) Data frame sent
I0816 21:42:43.345398       7 log.go:172] (0x40032871e0) Data frame received for 3
I0816 21:42:43.345466       7 log.go:172] (0x40021ec500) (3) Data frame handling
I0816 21:42:43.345948       7 log.go:172] (0x40032871e0) Data frame received for 1
I0816 21:42:43.346023       7 log.go:172] (0x400165db80) (1) Data frame handling
I0816 21:42:43.346105       7 log.go:172] (0x400165db80) (1) Data frame sent
I0816 21:42:43.346189       7 log.go:172] (0x40032871e0) (0x400165db80) Stream removed, broadcasting: 1
I0816 21:42:43.346310       7 log.go:172] (0x40032871e0) Go away received
I0816 21:42:43.346527       7 log.go:172] (0x40032871e0) (0x400165db80) Stream removed, broadcasting: 1
I0816 21:42:43.346613       7 log.go:172] (0x40032871e0) (0x40021ec500) Stream removed, broadcasting: 3
I0816 21:42:43.346690       7 log.go:172] (0x40032871e0) (0x400165dc20) Stream removed, broadcasting: 5
Aug 16 21:42:43.346: INFO: Exec stderr: ""
Aug 16 21:42:43.346: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5127 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 16 21:42:43.346: INFO: >>> kubeConfig: /root/.kube/config
I0816 21:42:43.830913       7 log.go:172] (0x4003287810) (0x4000457c20) Create stream
I0816 21:42:43.831014       7 log.go:172] (0x4003287810) (0x4000457c20) Stream added, broadcasting: 1
I0816 21:42:43.833762       7 log.go:172] (0x4003287810) Reply frame received for 1
I0816 21:42:43.833879       7 log.go:172] (0x4003287810) (0x4001008aa0) Create stream
I0816 21:42:43.833927       7 log.go:172] (0x4003287810) (0x4001008aa0) Stream added, broadcasting: 3
I0816 21:42:43.834955       7 log.go:172] (0x4003287810) Reply frame received for 3
I0816 21:42:43.835037       7 log.go:172] (0x4003287810) (0x4000457cc0) Create stream
I0816 21:42:43.835082       7 log.go:172] (0x4003287810) (0x4000457cc0) Stream added, broadcasting: 5
I0816 21:42:43.835807       7 log.go:172] (0x4003287810) Reply frame received for 5
I0816 21:42:43.871927       7 log.go:172] (0x4003287810) Data frame received for 3
I0816 21:42:43.872062       7 log.go:172] (0x4001008aa0) (3) Data frame handling
I0816 21:42:43.872169       7 log.go:172] (0x4003287810) Data frame received for 5
I0816 21:42:43.872322       7 log.go:172] (0x4000457cc0) (5) Data frame handling
I0816 21:42:43.872428       7 log.go:172] (0x4001008aa0) (3) Data frame sent
I0816 21:42:43.872552       7 log.go:172] (0x4003287810) Data frame received for 3
I0816 21:42:43.872628       7 log.go:172] (0x4001008aa0) (3) Data frame handling
I0816 21:42:43.872819       7 log.go:172] (0x4003287810) Data frame received for 1
I0816 21:42:43.872896       7 log.go:172] (0x4000457c20) (1) Data frame handling
I0816 21:42:43.872971       7 log.go:172] (0x4000457c20) (1) Data frame sent
I0816 21:42:43.873063       7 log.go:172] (0x4003287810) (0x4000457c20) Stream removed, broadcasting: 1
I0816 21:42:43.873158       7 log.go:172] (0x4003287810) Go away received
I0816 21:42:43.873409       7 log.go:172] (0x4003287810) (0x4000457c20) Stream removed, broadcasting: 1
I0816 21:42:43.873494       7 log.go:172] (0x4003287810) (0x4001008aa0) Stream removed, broadcasting: 3
I0816 21:42:43.873562       7 log.go:172] (0x4003287810) (0x4000457cc0) Stream removed, broadcasting: 5
Aug 16 21:42:43.873: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Aug 16 21:42:43.873: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5127 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 16 21:42:43.873: INFO: >>> kubeConfig: /root/.kube/config
I0816 21:42:44.130895       7 log.go:172] (0x400461a420) (0x400146f2c0) Create stream
I0816 21:42:44.130991       7 log.go:172] (0x400461a420) (0x400146f2c0) Stream added, broadcasting: 1
I0816 21:42:44.133932       7 log.go:172] (0x400461a420) Reply frame received for 1
I0816 21:42:44.134071       7 log.go:172] (0x400461a420) (0x40011d5680) Create stream
I0816 21:42:44.134141       7 log.go:172] (0x400461a420) (0x40011d5680) Stream added, broadcasting: 3
I0816 21:42:44.135341       7 log.go:172] (0x400461a420) Reply frame received for 3
I0816 21:42:44.135453       7 log.go:172] (0x400461a420) (0x40011d57c0) Create stream
I0816 21:42:44.135514       7 log.go:172] (0x400461a420) (0x40011d57c0) Stream added, broadcasting: 5
I0816 21:42:44.137020       7 log.go:172] (0x400461a420) Reply frame received for 5
I0816 21:42:44.196053       7 log.go:172] (0x400461a420) Data frame received for 3
I0816 21:42:44.196195       7 log.go:172] (0x40011d5680) (3) Data frame handling
I0816 21:42:44.196299       7 log.go:172] (0x40011d5680) (3) Data frame sent
I0816 21:42:44.196376       7 log.go:172] (0x400461a420) Data frame received for 3
I0816 21:42:44.196442       7 log.go:172] (0x40011d5680) (3) Data frame handling
I0816 21:42:44.196553       7 log.go:172] (0x400461a420) Data frame received for 5
I0816 21:42:44.196672       7 log.go:172] (0x40011d57c0) (5) Data frame handling
I0816 21:42:44.199373       7 log.go:172] (0x400461a420) Data frame received for 1
I0816 21:42:44.199489       7 log.go:172] (0x400146f2c0) (1) Data frame handling
I0816 21:42:44.199630       7 log.go:172] (0x400146f2c0) (1) Data frame sent
I0816 21:42:44.199755       7 log.go:172] (0x400461a420) (0x400146f2c0) Stream removed, broadcasting: 1
I0816 21:42:44.199980       7 log.go:172] (0x400461a420) Go away received
I0816 21:42:44.200077       7 log.go:172] (0x400461a420) (0x400146f2c0) Stream removed, broadcasting: 1
I0816 21:42:44.200171       7 log.go:172] (0x400461a420) (0x40011d5680) Stream removed, broadcasting: 3
I0816 21:42:44.200238       7 log.go:172] (0x400461a420) (0x40011d57c0) Stream removed, broadcasting: 5
Aug 16 21:42:44.200: INFO: Exec stderr: ""
Aug 16 21:42:44.200: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5127 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 16 21:42:44.200: INFO: >>> kubeConfig: /root/.kube/config
I0816 21:42:44.285701       7 log.go:172] (0x400461aa50) (0x400146f900) Create stream
I0816 21:42:44.285831       7 log.go:172] (0x400461aa50) (0x400146f900) Stream added, broadcasting: 1
I0816 21:42:44.290240       7 log.go:172] (0x400461aa50) Reply frame received for 1
I0816 21:42:44.290344       7 log.go:172] (0x400461aa50) (0x40021ec5a0) Create stream
I0816 21:42:44.290401       7 log.go:172] (0x400461aa50) (0x40021ec5a0) Stream added, broadcasting: 3
I0816 21:42:44.291348       7 log.go:172] (0x400461aa50) Reply frame received for 3
I0816 21:42:44.291432       7 log.go:172] (0x400461aa50) (0x400146f9a0) Create stream
I0816 21:42:44.291480       7 log.go:172] (0x400461aa50) (0x400146f9a0) Stream added, broadcasting: 5
I0816 21:42:44.292904       7 log.go:172] (0x400461aa50) Reply frame received for 5
I0816 21:42:44.337150       7 log.go:172] (0x400461aa50) Data frame received for 3
I0816 21:42:44.337248       7 log.go:172] (0x40021ec5a0) (3) Data frame handling
I0816 21:42:44.337304       7 log.go:172] (0x40021ec5a0) (3) Data frame sent
I0816 21:42:44.337350       7 log.go:172] (0x400461aa50) Data frame received for 3
I0816 21:42:44.337391       7 log.go:172] (0x40021ec5a0) (3) Data frame handling
I0816 21:42:44.337462       7 log.go:172] (0x400461aa50) Data frame received for 5
I0816 21:42:44.337523       7 log.go:172] (0x400146f9a0) (5) Data frame handling
I0816 21:42:44.338105       7 log.go:172] (0x400461aa50) Data frame received for 1
I0816 21:42:44.338193       7 log.go:172] (0x400146f900) (1) Data frame handling
I0816 21:42:44.338269       7 log.go:172] (0x400146f900) (1) Data frame sent
I0816 21:42:44.338336       7 log.go:172] (0x400461aa50) (0x400146f900) Stream removed, broadcasting: 1
I0816 21:42:44.338412       7 log.go:172] (0x400461aa50) Go away received
I0816 21:42:44.338656       7 log.go:172] (0x400461aa50) (0x400146f900) Stream removed, broadcasting: 1
I0816 21:42:44.338748       7 log.go:172] (0x400461aa50) (0x40021ec5a0) Stream removed, broadcasting: 3
I0816 21:42:44.338803       7 log.go:172] (0x400461aa50) (0x400146f9a0) Stream removed, broadcasting: 5
Aug 16 21:42:44.338: INFO: Exec stderr: ""
Aug 16 21:42:44.338: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5127 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 16 21:42:44.339: INFO: >>> kubeConfig: /root/.kube/config
I0816 21:42:44.411986       7 log.go:172] (0x4002912d10) (0x40010090e0) Create stream
I0816 21:42:44.412137       7 log.go:172] (0x4002912d10) (0x40010090e0) Stream added, broadcasting: 1
I0816 21:42:44.415731       7 log.go:172] (0x4002912d10) Reply frame received for 1
I0816 21:42:44.415914       7 log.go:172] (0x4002912d10) (0x40021ec6e0) Create stream
I0816 21:42:44.416009       7 log.go:172] (0x4002912d10) (0x40021ec6e0) Stream added, broadcasting: 3
I0816 21:42:44.417635       7 log.go:172] (0x4002912d10) Reply frame received for 3
I0816 21:42:44.417791       7 log.go:172] (0x4002912d10) (0x4001009220) Create stream
I0816 21:42:44.417884       7 log.go:172] (0x4002912d10) (0x4001009220) Stream added, broadcasting: 5
I0816 21:42:44.419244       7 log.go:172] (0x4002912d10) Reply frame received for 5
I0816 21:42:44.466620       7 log.go:172] (0x4002912d10) Data frame received for 5
I0816 21:42:44.466725       7 log.go:172] (0x4001009220) (5) Data frame handling
I0816 21:42:44.466839       7 log.go:172] (0x4002912d10) Data frame received for 3
I0816 21:42:44.466935       7 log.go:172] (0x40021ec6e0) (3) Data frame handling
I0816 21:42:44.467073       7 log.go:172] (0x40021ec6e0) (3) Data frame sent
I0816 21:42:44.467239       7 log.go:172] (0x4002912d10) Data frame received for 3
I0816 21:42:44.467358       7 log.go:172] (0x40021ec6e0) (3) Data frame handling
I0816 21:42:44.467536       7 log.go:172] (0x4002912d10) Data frame received for 1
I0816 21:42:44.467683       7 log.go:172] (0x40010090e0) (1) Data frame handling
I0816 21:42:44.467774       7 log.go:172] (0x40010090e0) (1) Data frame sent
I0816 21:42:44.467856       7 log.go:172] (0x4002912d10) (0x40010090e0) Stream removed, broadcasting: 1
I0816 21:42:44.467959       7 log.go:172] (0x4002912d10) Go away received
I0816 21:42:44.468214       7 log.go:172] (0x4002912d10) (0x40010090e0) Stream removed, broadcasting: 1
I0816 21:42:44.468308       7 log.go:172] (0x4002912d10) (0x40021ec6e0) Stream removed, broadcasting: 3
I0816 21:42:44.468394       7 log.go:172] (0x4002912d10) (0x4001009220) Stream removed, broadcasting: 5
Aug 16 21:42:44.468: INFO: Exec stderr: ""
Aug 16 21:42:44.468: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5127 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 16 21:42:44.468: INFO: >>> kubeConfig: /root/.kube/config
I0816 21:42:44.537941       7 log.go:172] (0x4002efc8f0) (0x40021ece60) Create stream
I0816 21:42:44.538143       7 log.go:172] (0x4002efc8f0) (0x40021ece60) Stream added, broadcasting: 1
I0816 21:42:44.543075       7 log.go:172] (0x4002efc8f0) Reply frame received for 1
I0816 21:42:44.543278       7 log.go:172] (0x4002efc8f0) (0x40021ecf00) Create stream
I0816 21:42:44.543373       7 log.go:172] (0x4002efc8f0) (0x40021ecf00) Stream added, broadcasting: 3
I0816 21:42:44.545126       7 log.go:172] (0x4002efc8f0) Reply frame received for 3
I0816 21:42:44.545227       7 log.go:172] (0x4002efc8f0) (0x40011d5900) Create stream
I0816 21:42:44.545282       7 log.go:172] (0x4002efc8f0) (0x40011d5900) Stream added, broadcasting: 5
I0816 21:42:44.546333       7 log.go:172] (0x4002efc8f0) Reply frame received for 5
I0816 21:42:44.587523       7 log.go:172] (0x4002efc8f0) Data frame received for 3
I0816 21:42:44.587674       7 log.go:172] (0x40021ecf00) (3) Data frame handling
I0816 21:42:44.587783       7 log.go:172] (0x4002efc8f0) Data frame received for 5
I0816 21:42:44.587926       7 log.go:172] (0x40011d5900) (5) Data frame handling
I0816 21:42:44.588059       7 log.go:172] (0x40021ecf00) (3) Data frame sent
I0816 21:42:44.588161       7 log.go:172] (0x4002efc8f0) Data frame received for 3
I0816 21:42:44.588265       7 log.go:172] (0x40021ecf00) (3) Data frame handling
I0816 21:42:44.588387       7 log.go:172] (0x4002efc8f0) Data frame received for 1
I0816 21:42:44.588460       7 log.go:172] (0x40021ece60) (1) Data frame handling
I0816 21:42:44.588542       7 log.go:172] (0x40021ece60) (1) Data frame sent
I0816 21:42:44.588627       7 log.go:172] (0x4002efc8f0) (0x40021ece60) Stream removed, broadcasting: 1
I0816 21:42:44.588807       7 log.go:172] (0x4002efc8f0) Go away received
I0816 21:42:44.589070       7 log.go:172] (0x4002efc8f0) (0x40021ece60) Stream removed, broadcasting: 1
I0816 21:42:44.589148       7 log.go:172] (0x4002efc8f0) (0x40021ecf00) Stream removed, broadcasting: 3
I0816 21:42:44.589211       7 log.go:172] (0x4002efc8f0) (0x40011d5900) Stream removed, broadcasting: 5
Aug 16 21:42:44.589: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:42:44.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-5127" for this suite.

• [SLOW TEST:61.868 seconds]
[k8s.io] KubeletManagedEtcHosts
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":260,"skipped":4367,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:42:44.690: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test env composition
Aug 16 21:42:45.628: INFO: Waiting up to 5m0s for pod "var-expansion-419303f7-34c3-4c1a-aefe-67b5b100a33b" in namespace "var-expansion-5317" to be "success or failure"
Aug 16 21:42:45.900: INFO: Pod "var-expansion-419303f7-34c3-4c1a-aefe-67b5b100a33b": Phase="Pending", Reason="", readiness=false. Elapsed: 271.82409ms
Aug 16 21:42:48.462: INFO: Pod "var-expansion-419303f7-34c3-4c1a-aefe-67b5b100a33b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.833123763s
Aug 16 21:42:50.647: INFO: Pod "var-expansion-419303f7-34c3-4c1a-aefe-67b5b100a33b": Phase="Pending", Reason="", readiness=false. Elapsed: 5.018523767s
Aug 16 21:42:52.740: INFO: Pod "var-expansion-419303f7-34c3-4c1a-aefe-67b5b100a33b": Phase="Pending", Reason="", readiness=false. Elapsed: 7.111402138s
Aug 16 21:42:54.942: INFO: Pod "var-expansion-419303f7-34c3-4c1a-aefe-67b5b100a33b": Phase="Pending", Reason="", readiness=false. Elapsed: 9.313544869s
Aug 16 21:42:57.412: INFO: Pod "var-expansion-419303f7-34c3-4c1a-aefe-67b5b100a33b": Phase="Pending", Reason="", readiness=false. Elapsed: 11.783375398s
Aug 16 21:43:00.189: INFO: Pod "var-expansion-419303f7-34c3-4c1a-aefe-67b5b100a33b": Phase="Pending", Reason="", readiness=false. Elapsed: 14.560245067s
Aug 16 21:43:02.721: INFO: Pod "var-expansion-419303f7-34c3-4c1a-aefe-67b5b100a33b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.092640008s
STEP: Saw pod success
Aug 16 21:43:02.722: INFO: Pod "var-expansion-419303f7-34c3-4c1a-aefe-67b5b100a33b" satisfied condition "success or failure"
Aug 16 21:43:03.042: INFO: Trying to get logs from node jerma-worker pod var-expansion-419303f7-34c3-4c1a-aefe-67b5b100a33b container dapi-container: 
STEP: delete the pod
Aug 16 21:43:03.428: INFO: Waiting for pod var-expansion-419303f7-34c3-4c1a-aefe-67b5b100a33b to disappear
Aug 16 21:43:04.307: INFO: Pod var-expansion-419303f7-34c3-4c1a-aefe-67b5b100a33b no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:43:04.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-5317" for this suite.

• [SLOW TEST:19.893 seconds]
[k8s.io] Variable Expansion
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":278,"completed":261,"skipped":4370,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
S
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:43:04.585: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-543
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a new StatefulSet
Aug 16 21:43:06.236: INFO: Found 0 stateful pods, waiting for 3
Aug 16 21:43:16.243: INFO: Found 2 stateful pods, waiting for 3
Aug 16 21:43:26.242: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 16 21:43:26.242: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 16 21:43:26.242: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Aug 16 21:43:26.257: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-543 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 16 21:43:27.691: INFO: stderr: "I0816 21:43:27.521061    4439 log.go:172] (0x4000ad2bb0) (0x4000b2e000) Create stream\nI0816 21:43:27.523984    4439 log.go:172] (0x4000ad2bb0) (0x4000b2e000) Stream added, broadcasting: 1\nI0816 21:43:27.534457    4439 log.go:172] (0x4000ad2bb0) Reply frame received for 1\nI0816 21:43:27.535141    4439 log.go:172] (0x4000ad2bb0) (0x4000807a40) Create stream\nI0816 21:43:27.535216    4439 log.go:172] (0x4000ad2bb0) (0x4000807a40) Stream added, broadcasting: 3\nI0816 21:43:27.536424    4439 log.go:172] (0x4000ad2bb0) Reply frame received for 3\nI0816 21:43:27.536646    4439 log.go:172] (0x4000ad2bb0) (0x40009fa000) Create stream\nI0816 21:43:27.536694    4439 log.go:172] (0x4000ad2bb0) (0x40009fa000) Stream added, broadcasting: 5\nI0816 21:43:27.537887    4439 log.go:172] (0x4000ad2bb0) Reply frame received for 5\nI0816 21:43:27.580146    4439 log.go:172] (0x4000ad2bb0) Data frame received for 5\nI0816 21:43:27.580481    4439 log.go:172] (0x40009fa000) (5) Data frame handling\nI0816 21:43:27.581404    4439 log.go:172] (0x40009fa000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0816 21:43:27.673494    4439 log.go:172] (0x4000ad2bb0) Data frame received for 5\nI0816 21:43:27.673621    4439 log.go:172] (0x40009fa000) (5) Data frame handling\nI0816 21:43:27.673796    4439 log.go:172] (0x4000ad2bb0) Data frame received for 3\nI0816 21:43:27.673978    4439 log.go:172] (0x4000807a40) (3) Data frame handling\nI0816 21:43:27.674120    4439 log.go:172] (0x4000807a40) (3) Data frame sent\nI0816 21:43:27.674294    4439 log.go:172] (0x4000ad2bb0) Data frame received for 3\nI0816 21:43:27.674419    4439 log.go:172] (0x4000807a40) (3) Data frame handling\nI0816 21:43:27.674831    4439 log.go:172] (0x4000ad2bb0) Data frame received for 1\nI0816 21:43:27.674873    4439 log.go:172] (0x4000b2e000) (1) Data frame handling\nI0816 21:43:27.674937    4439 log.go:172] (0x4000b2e000) (1) Data frame sent\nI0816 21:43:27.677091    4439 log.go:172] (0x4000ad2bb0) (0x4000b2e000) Stream removed, broadcasting: 1\nI0816 21:43:27.678179    4439 log.go:172] (0x4000ad2bb0) Go away received\nI0816 21:43:27.682651    4439 log.go:172] (0x4000ad2bb0) (0x4000b2e000) Stream removed, broadcasting: 1\nI0816 21:43:27.683117    4439 log.go:172] (0x4000ad2bb0) (0x4000807a40) Stream removed, broadcasting: 3\nI0816 21:43:27.683455    4439 log.go:172] (0x4000ad2bb0) (0x40009fa000) Stream removed, broadcasting: 5\n"
Aug 16 21:43:27.692: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 16 21:43:27.692: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Aug 16 21:43:37.730: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Aug 16 21:43:47.886: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-543 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 16 21:43:49.350: INFO: stderr: "I0816 21:43:49.255797    4461 log.go:172] (0x400011d600) (0x40007f9a40) Create stream\nI0816 21:43:49.260751    4461 log.go:172] (0x400011d600) (0x40007f9a40) Stream added, broadcasting: 1\nI0816 21:43:49.272035    4461 log.go:172] (0x400011d600) Reply frame received for 1\nI0816 21:43:49.272844    4461 log.go:172] (0x400011d600) (0x4000b66000) Create stream\nI0816 21:43:49.272925    4461 log.go:172] (0x400011d600) (0x4000b66000) Stream added, broadcasting: 3\nI0816 21:43:49.275316    4461 log.go:172] (0x400011d600) Reply frame received for 3\nI0816 21:43:49.275555    4461 log.go:172] (0x400011d600) (0x40007f9c20) Create stream\nI0816 21:43:49.275601    4461 log.go:172] (0x400011d600) (0x40007f9c20) Stream added, broadcasting: 5\nI0816 21:43:49.276605    4461 log.go:172] (0x400011d600) Reply frame received for 5\nI0816 21:43:49.334670    4461 log.go:172] (0x400011d600) Data frame received for 5\nI0816 21:43:49.335029    4461 log.go:172] (0x400011d600) Data frame received for 1\nI0816 21:43:49.335249    4461 log.go:172] (0x40007f9a40) (1) Data frame handling\nI0816 21:43:49.335469    4461 log.go:172] (0x400011d600) Data frame received for 3\nI0816 21:43:49.335566    4461 log.go:172] (0x4000b66000) (3) Data frame handling\nI0816 21:43:49.335631    4461 log.go:172] (0x40007f9c20) (5) Data frame handling\nI0816 21:43:49.336491    4461 log.go:172] (0x4000b66000) (3) Data frame sent\nI0816 21:43:49.336628    4461 log.go:172] (0x40007f9c20) (5) Data frame sent\nI0816 21:43:49.336868    4461 log.go:172] (0x400011d600) Data frame received for 3\nI0816 21:43:49.336921    4461 log.go:172] (0x4000b66000) (3) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0816 21:43:49.337178    4461 log.go:172] (0x40007f9a40) (1) Data frame sent\nI0816 21:43:49.337844    4461 log.go:172] (0x400011d600) Data frame received for 5\nI0816 21:43:49.337887    4461 log.go:172] (0x40007f9c20) (5) Data frame handling\nI0816 21:43:49.338681    4461 log.go:172] (0x400011d600) (0x40007f9a40) Stream removed, broadcasting: 1\nI0816 21:43:49.342031    4461 log.go:172] (0x400011d600) (0x40007f9a40) Stream removed, broadcasting: 1\nI0816 21:43:49.342346    4461 log.go:172] (0x400011d600) Go away received\nI0816 21:43:49.342407    4461 log.go:172] (0x400011d600) (0x4000b66000) Stream removed, broadcasting: 3\nI0816 21:43:49.343067    4461 log.go:172] (0x400011d600) (0x40007f9c20) Stream removed, broadcasting: 5\n"
Aug 16 21:43:49.352: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug 16 21:43:49.352: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Aug 16 21:44:09.411: INFO: Waiting for StatefulSet statefulset-543/ss2 to complete update
Aug 16 21:44:09.412: INFO: Waiting for Pod statefulset-543/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
STEP: Rolling back to a previous revision
Aug 16 21:44:19.710: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-543 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 16 21:44:21.247: INFO: stderr: "I0816 21:44:21.036381    4485 log.go:172] (0x400011c370) (0x40009ac0a0) Create stream\nI0816 21:44:21.041582    4485 log.go:172] (0x400011c370) (0x40009ac0a0) Stream added, broadcasting: 1\nI0816 21:44:21.053597    4485 log.go:172] (0x400011c370) Reply frame received for 1\nI0816 21:44:21.054129    4485 log.go:172] (0x400011c370) (0x4000811ae0) Create stream\nI0816 21:44:21.054197    4485 log.go:172] (0x400011c370) (0x4000811ae0) Stream added, broadcasting: 3\nI0816 21:44:21.055241    4485 log.go:172] (0x400011c370) Reply frame received for 3\nI0816 21:44:21.055494    4485 log.go:172] (0x400011c370) (0x400097a000) Create stream\nI0816 21:44:21.055552    4485 log.go:172] (0x400011c370) (0x400097a000) Stream added, broadcasting: 5\nI0816 21:44:21.056808    4485 log.go:172] (0x400011c370) Reply frame received for 5\nI0816 21:44:21.098198    4485 log.go:172] (0x400011c370) Data frame received for 5\nI0816 21:44:21.098383    4485 log.go:172] (0x400097a000) (5) Data frame handling\nI0816 21:44:21.098766    4485 log.go:172] (0x400097a000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0816 21:44:21.227644    4485 log.go:172] (0x400011c370) Data frame received for 3\nI0816 21:44:21.227768    4485 log.go:172] (0x4000811ae0) (3) Data frame handling\nI0816 21:44:21.227872    4485 log.go:172] (0x4000811ae0) (3) Data frame sent\nI0816 21:44:21.228028    4485 log.go:172] (0x400011c370) Data frame received for 5\nI0816 21:44:21.228184    4485 log.go:172] (0x400097a000) (5) Data frame handling\nI0816 21:44:21.228341    4485 log.go:172] (0x400011c370) Data frame received for 3\nI0816 21:44:21.228479    4485 log.go:172] (0x4000811ae0) (3) Data frame handling\nI0816 21:44:21.229741    4485 log.go:172] (0x400011c370) Data frame received for 1\nI0816 21:44:21.229884    4485 log.go:172] (0x40009ac0a0) (1) Data frame handling\nI0816 21:44:21.230014    4485 log.go:172] (0x40009ac0a0) (1) Data frame sent\nI0816 21:44:21.231452    4485 log.go:172] (0x400011c370) (0x40009ac0a0) Stream removed, broadcasting: 1\nI0816 21:44:21.234333    4485 log.go:172] (0x400011c370) Go away received\nI0816 21:44:21.238197    4485 log.go:172] (0x400011c370) (0x40009ac0a0) Stream removed, broadcasting: 1\nI0816 21:44:21.238776    4485 log.go:172] (0x400011c370) (0x4000811ae0) Stream removed, broadcasting: 3\nI0816 21:44:21.239189    4485 log.go:172] (0x400011c370) (0x400097a000) Stream removed, broadcasting: 5\n"
Aug 16 21:44:21.248: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 16 21:44:21.248: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Aug 16 21:44:21.317: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Aug 16 21:44:31.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-543 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 16 21:44:32.752: INFO: stderr: "I0816 21:44:32.660164    4507 log.go:172] (0x40006ce000) (0x4000aac000) Create stream\nI0816 21:44:32.665043    4507 log.go:172] (0x40006ce000) (0x4000aac000) Stream added, broadcasting: 1\nI0816 21:44:32.677652    4507 log.go:172] (0x40006ce000) Reply frame received for 1\nI0816 21:44:32.678826    4507 log.go:172] (0x40006ce000) (0x40007fe000) Create stream\nI0816 21:44:32.678926    4507 log.go:172] (0x40006ce000) (0x40007fe000) Stream added, broadcasting: 3\nI0816 21:44:32.680944    4507 log.go:172] (0x40006ce000) Reply frame received for 3\nI0816 21:44:32.681242    4507 log.go:172] (0x40006ce000) (0x40007fe0a0) Create stream\nI0816 21:44:32.681306    4507 log.go:172] (0x40006ce000) (0x40007fe0a0) Stream added, broadcasting: 5\nI0816 21:44:32.682667    4507 log.go:172] (0x40006ce000) Reply frame received for 5\nI0816 21:44:32.735038    4507 log.go:172] (0x40006ce000) Data frame received for 5\nI0816 21:44:32.735198    4507 log.go:172] (0x40006ce000) Data frame received for 1\nI0816 21:44:32.735314    4507 log.go:172] (0x40007fe0a0) (5) Data frame handling\nI0816 21:44:32.735504    4507 log.go:172] (0x4000aac000) (1) Data frame handling\nI0816 21:44:32.735768    4507 log.go:172] (0x40006ce000) Data frame received for 3\nI0816 21:44:32.735868    4507 log.go:172] (0x40007fe000) (3) Data frame handling\nI0816 21:44:32.735985    4507 log.go:172] (0x40007fe000) (3) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0816 21:44:32.736178    4507 log.go:172] (0x40007fe0a0) (5) Data frame sent\nI0816 21:44:32.736397    4507 log.go:172] (0x4000aac000) (1) Data frame sent\nI0816 21:44:32.736620    4507 log.go:172] (0x40006ce000) Data frame received for 3\nI0816 21:44:32.736899    4507 log.go:172] (0x40007fe000) (3) Data frame handling\nI0816 21:44:32.737105    4507 log.go:172] (0x40006ce000) Data frame received for 5\nI0816 21:44:32.737202    4507 log.go:172] (0x40007fe0a0) (5) Data frame handling\nI0816 21:44:32.739263    4507 log.go:172] (0x40006ce000) (0x4000aac000) Stream removed, broadcasting: 1\nI0816 21:44:32.742890    4507 log.go:172] (0x40006ce000) Go away received\nI0816 21:44:32.745579    4507 log.go:172] (0x40006ce000) (0x4000aac000) Stream removed, broadcasting: 1\nI0816 21:44:32.745954    4507 log.go:172] (0x40006ce000) (0x40007fe000) Stream removed, broadcasting: 3\nI0816 21:44:32.746356    4507 log.go:172] (0x40006ce000) (0x40007fe0a0) Stream removed, broadcasting: 5\n"
Aug 16 21:44:32.753: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug 16 21:44:32.753: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Aug 16 21:45:02.784: INFO: Waiting for StatefulSet statefulset-543/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Aug 16 21:45:12.797: INFO: Deleting all statefulset in ns statefulset-543
Aug 16 21:45:12.801: INFO: Scaling statefulset ss2 to 0
Aug 16 21:45:32.826: INFO: Waiting for statefulset status.replicas updated to 0
Aug 16 21:45:32.831: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:45:32.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-543" for this suite.

• [SLOW TEST:148.278 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should perform rolling updates and roll backs of template modifications [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":278,"completed":262,"skipped":4371,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with best effort scope. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:45:32.865: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with best effort scope. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a ResourceQuota with best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a best-effort pod
STEP: Ensuring resource quota with best effort scope captures the pod usage
STEP: Ensuring resource quota with not best effort ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a not best-effort pod
STEP: Ensuring resource quota with not best effort scope captures the pod usage
STEP: Ensuring resource quota with best effort scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:45:49.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-4347" for this suite.

• [SLOW TEST:16.458 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with best effort scope. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":278,"completed":263,"skipped":4373,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSS
------------------------------
[k8s.io] Lease 
  lease API should be available [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Lease
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:45:49.323: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename lease-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] lease API should be available [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Lease
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:45:50.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-3439" for this suite.
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":278,"completed":264,"skipped":4377,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:45:50.256: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Aug 16 21:45:58.388: INFO: 10 pods remaining
Aug 16 21:45:58.388: INFO: 0 pods has nil DeletionTimestamp
Aug 16 21:45:58.389: INFO: 
Aug 16 21:45:59.167: INFO: 0 pods remaining
Aug 16 21:45:59.167: INFO: 0 pods has nil DeletionTimestamp
Aug 16 21:45:59.167: INFO: 
Aug 16 21:45:59.885: INFO: 0 pods remaining
Aug 16 21:45:59.885: INFO: 0 pods has nil DeletionTimestamp
Aug 16 21:45:59.885: INFO: 
STEP: Gathering metrics
W0816 21:46:01.185621       7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 16 21:46:01.185: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:46:01.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-361" for this suite.

• [SLOW TEST:11.268 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":278,"completed":265,"skipped":4379,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:46:01.528: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-372f3de4-34a1-49c8-9cb0-27f678baabfb
STEP: Creating a pod to test consume secrets
Aug 16 21:46:02.582: INFO: Waiting up to 5m0s for pod "pod-secrets-2292ecb7-7b44-46aa-af30-58dcd82325f4" in namespace "secrets-5575" to be "success or failure"
Aug 16 21:46:02.720: INFO: Pod "pod-secrets-2292ecb7-7b44-46aa-af30-58dcd82325f4": Phase="Pending", Reason="", readiness=false. Elapsed: 137.688923ms
Aug 16 21:46:04.728: INFO: Pod "pod-secrets-2292ecb7-7b44-46aa-af30-58dcd82325f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.14569269s
Aug 16 21:46:06.776: INFO: Pod "pod-secrets-2292ecb7-7b44-46aa-af30-58dcd82325f4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.193725581s
Aug 16 21:46:08.782: INFO: Pod "pod-secrets-2292ecb7-7b44-46aa-af30-58dcd82325f4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.199610522s
STEP: Saw pod success
Aug 16 21:46:08.782: INFO: Pod "pod-secrets-2292ecb7-7b44-46aa-af30-58dcd82325f4" satisfied condition "success or failure"
Aug 16 21:46:08.786: INFO: Trying to get logs from node jerma-worker pod pod-secrets-2292ecb7-7b44-46aa-af30-58dcd82325f4 container secret-volume-test: 
STEP: delete the pod
Aug 16 21:46:08.814: INFO: Waiting for pod pod-secrets-2292ecb7-7b44-46aa-af30-58dcd82325f4 to disappear
Aug 16 21:46:08.837: INFO: Pod pod-secrets-2292ecb7-7b44-46aa-af30-58dcd82325f4 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:46:08.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5575" for this suite.
STEP: Destroying namespace "secret-namespace-4085" for this suite.

• [SLOW TEST:7.722 seconds]
[sig-storage] Secrets
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":278,"completed":266,"skipped":4391,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:46:09.250: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 16 21:46:33.299: INFO: Container started at 2020-08-16 21:46:14 +0000 UTC, pod became ready at 2020-08-16 21:46:31 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:46:33.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5847" for this suite.

• [SLOW TEST:24.066 seconds]
[k8s.io] Probing container
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":278,"completed":267,"skipped":4391,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:46:33.321: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 16 21:46:42.047: INFO: Waiting up to 5m0s for pod "client-envvars-4efba6d3-7829-4828-b6ff-64e14afe6489" in namespace "pods-4432" to be "success or failure"
Aug 16 21:46:42.053: INFO: Pod "client-envvars-4efba6d3-7829-4828-b6ff-64e14afe6489": Phase="Pending", Reason="", readiness=false. Elapsed: 6.083088ms
Aug 16 21:46:44.060: INFO: Pod "client-envvars-4efba6d3-7829-4828-b6ff-64e14afe6489": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013149072s
Aug 16 21:46:46.066: INFO: Pod "client-envvars-4efba6d3-7829-4828-b6ff-64e14afe6489": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019069284s
STEP: Saw pod success
Aug 16 21:46:46.067: INFO: Pod "client-envvars-4efba6d3-7829-4828-b6ff-64e14afe6489" satisfied condition "success or failure"
Aug 16 21:46:46.070: INFO: Trying to get logs from node jerma-worker pod client-envvars-4efba6d3-7829-4828-b6ff-64e14afe6489 container env3cont: 
STEP: delete the pod
Aug 16 21:46:46.115: INFO: Waiting for pod client-envvars-4efba6d3-7829-4828-b6ff-64e14afe6489 to disappear
Aug 16 21:46:46.120: INFO: Pod client-envvars-4efba6d3-7829-4828-b6ff-64e14afe6489 no longer exists
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:46:46.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4432" for this suite.

• [SLOW TEST:12.812 seconds]
[k8s.io] Pods
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":278,"completed":268,"skipped":4443,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SS
------------------------------
[sig-apps] Job 
  should adopt matching orphans and release non-matching pods [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Job
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:46:46.134: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching orphans and release non-matching pods [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: Orphaning one of the Job's Pods
Aug 16 21:46:52.927: INFO: Successfully updated pod "adopt-release-t747s"
STEP: Checking that the Job readopts the Pod
Aug 16 21:46:52.927: INFO: Waiting up to 15m0s for pod "adopt-release-t747s" in namespace "job-8179" to be "adopted"
Aug 16 21:46:52.949: INFO: Pod "adopt-release-t747s": Phase="Running", Reason="", readiness=true. Elapsed: 21.941771ms
Aug 16 21:46:54.955: INFO: Pod "adopt-release-t747s": Phase="Running", Reason="", readiness=true. Elapsed: 2.028323777s
Aug 16 21:46:54.956: INFO: Pod "adopt-release-t747s" satisfied condition "adopted"
STEP: Removing the labels from the Job's Pod
Aug 16 21:46:55.472: INFO: Successfully updated pod "adopt-release-t747s"
STEP: Checking that the Job releases the Pod
Aug 16 21:46:55.472: INFO: Waiting up to 15m0s for pod "adopt-release-t747s" in namespace "job-8179" to be "released"
Aug 16 21:46:55.496: INFO: Pod "adopt-release-t747s": Phase="Running", Reason="", readiness=true. Elapsed: 23.732613ms
Aug 16 21:46:55.497: INFO: Pod "adopt-release-t747s" satisfied condition "released"
[AfterEach] [sig-apps] Job
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:46:55.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-8179" for this suite.

• [SLOW TEST:9.435 seconds]
[sig-apps] Job
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching orphans and release non-matching pods [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":278,"completed":269,"skipped":4445,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:46:55.571: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-4918.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-4918.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4918.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-4918.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-4918.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4918.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 16 21:47:03.785: INFO: DNS probes using dns-4918/dns-test-7d51fc65-2093-4819-a7ef-f461c6d67c84 succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:47:04.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-4918" for this suite.

• [SLOW TEST:8.539 seconds]
[sig-network] DNS
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":278,"completed":270,"skipped":4473,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should support configurable pod DNS nameservers [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:47:04.113: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support configurable pod DNS nameservers [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod with dnsPolicy=None and customized dnsConfig...
Aug 16 21:47:04.591: INFO: Created pod &Pod{ObjectMeta:{dns-6172  dns-6172 /api/v1/namespaces/dns-6172/pods/dns-6172 f623360e-d541-4a3b-abd9-996f3160bd2a 516193 0 2020-08-16 21:47:04 +0000 UTC   map[] map[] [] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rr6fd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rr6fd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rr6fd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
STEP: Verifying customized DNS suffix list is configured on pod...
Aug 16 21:47:10.619: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-6172 PodName:dns-6172 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 16 21:47:10.620: INFO: >>> kubeConfig: /root/.kube/config
I0816 21:47:10.681430       7 log.go:172] (0x4002be3c30) (0x4001978c80) Create stream
I0816 21:47:10.681590       7 log.go:172] (0x4002be3c30) (0x4001978c80) Stream added, broadcasting: 1
I0816 21:47:10.684880       7 log.go:172] (0x4002be3c30) Reply frame received for 1
I0816 21:47:10.685061       7 log.go:172] (0x4002be3c30) (0x4001978e60) Create stream
I0816 21:47:10.685136       7 log.go:172] (0x4002be3c30) (0x4001978e60) Stream added, broadcasting: 3
I0816 21:47:10.686237       7 log.go:172] (0x4002be3c30) Reply frame received for 3
I0816 21:47:10.686370       7 log.go:172] (0x4002be3c30) (0x4001978fa0) Create stream
I0816 21:47:10.686445       7 log.go:172] (0x4002be3c30) (0x4001978fa0) Stream added, broadcasting: 5
I0816 21:47:10.687589       7 log.go:172] (0x4002be3c30) Reply frame received for 5
I0816 21:47:10.747126       7 log.go:172] (0x4002be3c30) Data frame received for 3
I0816 21:47:10.747274       7 log.go:172] (0x4001978e60) (3) Data frame handling
I0816 21:47:10.747443       7 log.go:172] (0x4001978e60) (3) Data frame sent
I0816 21:47:10.748977       7 log.go:172] (0x4002be3c30) Data frame received for 3
I0816 21:47:10.749147       7 log.go:172] (0x4001978e60) (3) Data frame handling
I0816 21:47:10.749302       7 log.go:172] (0x4002be3c30) Data frame received for 5
I0816 21:47:10.749468       7 log.go:172] (0x4001978fa0) (5) Data frame handling
I0816 21:47:10.751288       7 log.go:172] (0x4002be3c30) Data frame received for 1
I0816 21:47:10.751376       7 log.go:172] (0x4001978c80) (1) Data frame handling
I0816 21:47:10.751487       7 log.go:172] (0x4001978c80) (1) Data frame sent
I0816 21:47:10.751614       7 log.go:172] (0x4002be3c30) (0x4001978c80) Stream removed, broadcasting: 1
I0816 21:47:10.751733       7 log.go:172] (0x4002be3c30) Go away received
I0816 21:47:10.752211       7 log.go:172] (0x4002be3c30) (0x4001978c80) Stream removed, broadcasting: 1
I0816 21:47:10.752350       7 log.go:172] (0x4002be3c30) (0x4001978e60) Stream removed, broadcasting: 3
I0816 21:47:10.752467       7 log.go:172] (0x4002be3c30) (0x4001978fa0) Stream removed, broadcasting: 5
STEP: Verifying customized DNS server is configured on pod...
Aug 16 21:47:10.753: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-6172 PodName:dns-6172 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 16 21:47:10.753: INFO: >>> kubeConfig: /root/.kube/config
I0816 21:47:10.819021       7 log.go:172] (0x400461a420) (0x400146f2c0) Create stream
I0816 21:47:10.819142       7 log.go:172] (0x400461a420) (0x400146f2c0) Stream added, broadcasting: 1
I0816 21:47:10.822305       7 log.go:172] (0x400461a420) Reply frame received for 1
I0816 21:47:10.822508       7 log.go:172] (0x400461a420) (0x400146f400) Create stream
I0816 21:47:10.822629       7 log.go:172] (0x400461a420) (0x400146f400) Stream added, broadcasting: 3
I0816 21:47:10.824320       7 log.go:172] (0x400461a420) Reply frame received for 3
I0816 21:47:10.824440       7 log.go:172] (0x400461a420) (0x400121efa0) Create stream
I0816 21:47:10.824515       7 log.go:172] (0x400461a420) (0x400121efa0) Stream added, broadcasting: 5
I0816 21:47:10.826322       7 log.go:172] (0x400461a420) Reply frame received for 5
I0816 21:47:10.895681       7 log.go:172] (0x400461a420) Data frame received for 3
I0816 21:47:10.895870       7 log.go:172] (0x400146f400) (3) Data frame handling
I0816 21:47:10.896024       7 log.go:172] (0x400146f400) (3) Data frame sent
I0816 21:47:10.897553       7 log.go:172] (0x400461a420) Data frame received for 3
I0816 21:47:10.897682       7 log.go:172] (0x400146f400) (3) Data frame handling
I0816 21:47:10.897810       7 log.go:172] (0x400461a420) Data frame received for 5
I0816 21:47:10.897925       7 log.go:172] (0x400121efa0) (5) Data frame handling
I0816 21:47:10.899708       7 log.go:172] (0x400461a420) Data frame received for 1
I0816 21:47:10.899851       7 log.go:172] (0x400146f2c0) (1) Data frame handling
I0816 21:47:10.899970       7 log.go:172] (0x400146f2c0) (1) Data frame sent
I0816 21:47:10.900093       7 log.go:172] (0x400461a420) (0x400146f2c0) Stream removed, broadcasting: 1
I0816 21:47:10.900234       7 log.go:172] (0x400461a420) Go away received
I0816 21:47:10.901030       7 log.go:172] (0x400461a420) (0x400146f2c0) Stream removed, broadcasting: 1
I0816 21:47:10.901139       7 log.go:172] (0x400461a420) (0x400146f400) Stream removed, broadcasting: 3
I0816 21:47:10.901221       7 log.go:172] (0x400461a420) (0x400121efa0) Stream removed, broadcasting: 5
Aug 16 21:47:10.901: INFO: Deleting pod dns-6172...
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:47:10.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6172" for this suite.

• [SLOW TEST:6.852 seconds]
[sig-network] DNS
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should support configurable pod DNS nameservers [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":278,"completed":271,"skipped":4487,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
S
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert from CR v1 to CR v2 [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:47:10.965: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Aug 16 21:47:13.527: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Aug 16 21:47:15.619: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733211233, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733211233, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733211233, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733211233, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 16 21:47:18.699: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert from CR v1 to CR v2 [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 16 21:47:18.706: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: v2 custom resource should be converted
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:47:19.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-4200" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136

• [SLOW TEST:9.151 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert from CR v1 to CR v2 [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":278,"completed":272,"skipped":4488,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  listing custom resource definition objects works  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:47:20.118: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] listing custom resource definition objects works  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 16 21:47:20.198: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:47:24.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-8079" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":278,"completed":273,"skipped":4503,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:47:24.446: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod liveness-361196d4-f651-49e1-993b-573983f1551b in namespace container-probe-2114
Aug 16 21:47:28.558: INFO: Started pod liveness-361196d4-f651-49e1-993b-573983f1551b in namespace container-probe-2114
STEP: checking the pod's current state and verifying that restartCount is present
Aug 16 21:47:28.563: INFO: Initial restart count of pod liveness-361196d4-f651-49e1-993b-573983f1551b is 0
Aug 16 21:47:46.641: INFO: Restart count of pod container-probe-2114/liveness-361196d4-f651-49e1-993b-573983f1551b is now 1 (18.078248799s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:47:46.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2114" for this suite.

• [SLOW TEST:22.240 seconds]
[k8s.io] Probing container
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":274,"skipped":4532,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:47:46.688: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Aug 16 21:47:52.146: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:47:52.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-9734" for this suite.

• [SLOW TEST:5.496 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":278,"completed":275,"skipped":4555,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
[sig-cli] Kubectl client Guestbook application 
  should create and stop a working application  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:47:52.184: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should create and stop a working application  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating all guestbook components
Aug 16 21:47:52.259: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-slave
  labels:
    app: agnhost
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: slave
    tier: backend

Aug 16 21:47:52.259: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8500'
Aug 16 21:47:54.067: INFO: stderr: ""
Aug 16 21:47:54.067: INFO: stdout: "service/agnhost-slave created\n"
Aug 16 21:47:54.068: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: master
    tier: backend

Aug 16 21:47:54.069: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8500'
Aug 16 21:47:55.806: INFO: stderr: ""
Aug 16 21:47:55.806: INFO: stdout: "service/agnhost-master created\n"
Aug 16 21:47:55.808: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Aug 16 21:47:55.808: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8500'
Aug 16 21:47:57.485: INFO: stderr: ""
Aug 16 21:47:57.485: INFO: stdout: "service/frontend created\n"
Aug 16 21:47:57.487: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80

Aug 16 21:47:57.487: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8500'
Aug 16 21:47:59.040: INFO: stderr: ""
Aug 16 21:47:59.040: INFO: stdout: "deployment.apps/frontend created\n"
Aug 16 21:47:59.042: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Aug 16 21:47:59.042: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8500'
Aug 16 21:48:00.608: INFO: stderr: ""
Aug 16 21:48:00.608: INFO: stdout: "deployment.apps/agnhost-master created\n"
Aug 16 21:48:00.609: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Aug 16 21:48:00.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8500'
Aug 16 21:48:03.042: INFO: stderr: ""
Aug 16 21:48:03.042: INFO: stdout: "deployment.apps/agnhost-slave created\n"
STEP: validating guestbook app
Aug 16 21:48:03.042: INFO: Waiting for all frontend pods to be Running.
Aug 16 21:48:08.094: INFO: Waiting for frontend to serve content.
Aug 16 21:48:09.137: INFO: Trying to add a new entry to the guestbook.
Aug 16 21:48:09.150: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Aug 16 21:48:09.159: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8500'
Aug 16 21:48:10.639: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 16 21:48:10.639: INFO: stdout: "service \"agnhost-slave\" force deleted\n"
STEP: using delete to clean up resources
Aug 16 21:48:10.641: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8500'
Aug 16 21:48:11.892: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 16 21:48:11.893: INFO: stdout: "service \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Aug 16 21:48:11.894: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8500'
Aug 16 21:48:13.199: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 16 21:48:13.199: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Aug 16 21:48:13.201: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8500'
Aug 16 21:48:14.449: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 16 21:48:14.450: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Aug 16 21:48:14.451: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8500'
Aug 16 21:48:15.888: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 16 21:48:15.889: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Aug 16 21:48:15.890: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8500'
Aug 16 21:48:17.170: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 16 21:48:17.170: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:48:17.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8500" for this suite.

• [SLOW TEST:24.995 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Guestbook application
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:381
    should create and stop a working application  [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":278,"completed":276,"skipped":4555,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 16 21:48:17.181: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 16 21:48:17.617: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2bf58a78-ecdf-4b6d-8489-97b6449a27b9" in namespace "projected-747" to be "success or failure"
Aug 16 21:48:17.646: INFO: Pod "downwardapi-volume-2bf58a78-ecdf-4b6d-8489-97b6449a27b9": Phase="Pending", Reason="", readiness=false. Elapsed: 28.419376ms
Aug 16 21:48:19.652: INFO: Pod "downwardapi-volume-2bf58a78-ecdf-4b6d-8489-97b6449a27b9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034529864s
Aug 16 21:48:21.668: INFO: Pod "downwardapi-volume-2bf58a78-ecdf-4b6d-8489-97b6449a27b9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.049810654s
STEP: Saw pod success
Aug 16 21:48:21.668: INFO: Pod "downwardapi-volume-2bf58a78-ecdf-4b6d-8489-97b6449a27b9" satisfied condition "success or failure"
Aug 16 21:48:21.879: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-2bf58a78-ecdf-4b6d-8489-97b6449a27b9 container client-container: 
STEP: delete the pod
Aug 16 21:48:22.258: INFO: Waiting for pod downwardapi-volume-2bf58a78-ecdf-4b6d-8489-97b6449a27b9 to disappear
Aug 16 21:48:22.266: INFO: Pod downwardapi-volume-2bf58a78-ecdf-4b6d-8489-97b6449a27b9 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 16 21:48:22.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-747" for this suite.

• [SLOW TEST:5.128 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":277,"skipped":4560,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSS
Aug 16 21:48:22.311: INFO: Running AfterSuite actions on all nodes
Aug 16 21:48:22.313: INFO: Running AfterSuite actions on node 1
Aug 16 21:48:22.313: INFO: Skipping dumping logs from cluster
{"msg":"Test Suite completed","total":278,"completed":277,"skipped":4566,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}


Summarizing 1 Failure:

[Fail] [sig-cli] Kubectl client Kubectl logs [It] should be able to retrieve and filter logs  [Conformance] 
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399

Ran 278 of 4844 Specs in 6320.742 seconds
FAIL! -- 277 Passed | 1 Failed | 0 Pending | 4566 Skipped
--- FAIL: TestE2E (6321.49s)
FAIL
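
For reference, a single failing spec like the one summarized above can usually be replayed in isolation by focusing the suite on it. Assuming the run was driven by the e2e.test binary with the same kubeconfig, something along these lines (the focus string is a regular expression matched against the spec name; a --provider flag may also be needed depending on how the cluster was brought up):

./e2e.test --kubeconfig=/root/.kube/config \
  --ginkgo.focus='Kubectl logs should be able to retrieve and filter logs'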